100+ Essential DevOps Concepts

+
🔄 CI/CD
+
#Continuous Integration (CI): The practice of merging all developers' working copies to a shared mainline several times a day. #Continuous Deployment (CD): The practice of releasing every change to customers through an automated pipeline.
🏗 Infrastructure as Code (IaC)
+
The process of managing and provisioning computer data centers through machine-readable definition files, rather than physical hardware configuration or interactive configuration tools.
📚 Version Control Systems
+
#Git: A distributed version control system for tracking changes in source code during software development. #Subversion: A centralized version control system characterized by its reliability as a safe haven for valuable data.
🔬 Test Automation
+
Test automation uses special software (separate from the software being tested) to control the execution of tests and compare actual outcomes with predicted outcomes, extending the depth and scope of testing to improve software quality. It automates a manual part of the testing phase of the software development lifecycle, covering functionality, performance, regression, and other test types. The goal is to increase the efficiency, effectiveness, and coverage of software testing with the least amount of human intervention, making it practical to rerun tests that would be difficult to perform repeatedly by hand. Test automation is a critical part of Continuous Integration and Continuous Deployment (CI/CD) practices, as it enables frequent and consistent testing that catches issues as early as possible.
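For example, a minimal automated unit test, sketched here with the xUnit framework (Calculator is a hypothetical class under test):
using Xunit;

public class Calculator
{
    public int Add(int a, int b) => a + b;
}

public class CalculatorTests
{
    [Fact]
    public void Add_ReturnsSumOfOperands()
    {
        var calc = new Calculator();
        Assert.Equal(5, calc.Add(2, 3)); // compares the predicted outcome with the actual one
    }
}
A CI pipeline can run such tests on every commit, which is what makes frequent, consistent testing practical.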
⚙️ Configuration Management
+
The process of systematically handling changes to a system in a way that it maintains integrity over time.
📦 Containerization
+
#Docker: An open-source platform that automates the deployment, scaling, and management of applications. #Kubernetes: An open-source system for automating deployment, scaling, and management of containerized applications.
👀 Monitoring and Logging
+
The process of checking the status or progress of something over time and maintaining an ordered record of events.
🧩 Microservices
+
An architectural style that structures an application as a collection of loosely coupled, independently deployable services that are highly maintainable and testable.
📊 DevOps Metrics
+
Key Performance Indicators (KPIs) used to evaluate the effectiveness of a DevOps team, like deployment frequency or mean time to recovery.
☁ Cloud Computing
+
#AWS: Amazon's cloud computing platform that provides a mix of infrastructure as a service (IaaS), platform as a service (PaaS), and packaged software as a service (SaaS) offerings. #Azure: Microsoft's public cloud computing platform. #GCP: Google's suite of cloud computing services that runs on the same infrastructure that Google uses internally for its end-user products.
🔒 Security in DevOps (DevSecOps)
+
The philosophy of integrating security practices within the DevOps process.
🗃 GitOps
+
A way of implementing Continuous Deployment for cloud native applications, using Git as a 'single source of truth'.
🌍 Declarative System
+
In a declarative system, the desired system state is described in a file (or set of files), and it's the system's responsibility to achieve this state. This contrasts with an imperative system, where specific commands are executed to reach the desired state. GitOps relies on declarative specifications to manage system configurations.
🔄 Convergence
+
In the context of GitOps, convergence refers to the process of the system moving towards the desired state, as described in the Git repository. When changes are made to the repository, automated processes reconcile the current system state with the desired state.
🔁 Reconciliation Loops
+
In GitOps, reconciliation loops are the continuous cycles of checking the current system state and applying changes to converge towards the desired state. These are often managed by Kubernetes operators or controllers.
💼 Configuration Drift
+
Configuration drift refers to the phenomenon where environments become inconsistent over time due to manual changes or updates. GitOps helps to avoid this by ensuring all changes are made in the Git repository and automatically applied to the system.
💻 Infrastructure as Code (IaC)
+
While this isn't exclusive to GitOps, IaC is a key component of the GitOps approach. Infrastructure as Code involves managing and provisioning computing resources through machine-readable definition files, rather than manual hardware configuration or interactive configuration tools. In GitOps, all changes to the system are made through the Git repository. This provides a clear audit trail of all changes, supports easy rollbacks, and ensures all changes are reviewed and approved before being applied to the system.
🚀 Canary Deployments
+
Canary deployments involve releasing new versions of a service to a small subset of users before rolling it out to all users. This approach, often used in conjunction with GitOps, allows teams to test and monitor the new version in a live environment with real users, reducing the risk of a full-scale deployment.
🚫💻 Serverless Architecture
+
A software design pattern where applications are hosted by a third-party service, eliminating the need for server software and hardware management.
🏃 Agile Methodology
+
An approach to project management, used in software development, that helps teams respond to the unpredictability of building software through incremental, iterative work cadences, known as sprints.
🖥 IT Operations
+
The set of all processes and services that are both provisioned by an IT staff to their internal or external clients and used by the IT staff themselves.
📜 Scripting & Automation
+
The ability to write scripts in languages like Bash and Python to automate repetitive tasks.
🔨 Build Tools
+
Tools that automate the creation of executable applications from source code (e.g., Maven, Gradle, and Ant).
🌐 Networking Basics
+
Understanding the basics of networking is crucial for creating and managing applications in the cloud.
⏱ Performance Testing
+
Testing conducted to determine how a system performs in terms of responsiveness and stability under a particular workload.
🔁 Load Balancing
+
The process of distributing network traffic across multiple servers to ensure no single server bears too much demand.
💻 Virtualization
+
The process of creating a virtual version of something, including virtual computer hardware systems, storage devices, and computer network resources.
🌍 Web Services
+
Services used by the network to send and receive data (e.g., REST and SOAP).
💾 Database Management
+
Understanding databases, their management, and their interaction with applications is a key skill (e.g., MySQL, PostgreSQL, MongoDB).
📈 Scalability
+
The capability of a system to grow and manage increased demand.
🔥 Disaster Recovery
+
The area of security planning that deals with protecting an organization from the effects of significant negative events.
🛡 Incident Management
+
The process to identify, analyze, and correct hazards to prevent a future re-occurrence.
🚦 Traffic Management
+
The process of managing the incoming and outgoing network traffic.
⚖ Capacity Planning
+
The process of determining the production capacity needed by an organization to meet changing demands for its products.
📝 Documentation
+
Creating high-quality documentation is a key skill for any DevOps engineer.
🧪 Chaos Engineering
+
The discipline of experimenting on a system to build confidence in the system's capability to withstand turbulent conditions in production.
🔐 Access Management
+
The process of granting authorized users the right to use a service, while preventing access to non-authorized users.
🔗 API Management
+
The process of creating, publishing, documenting, and overseeing APIs in a secure and scalable environment.
🧱 Architecture Design
+
The practice of designing the overall architecture of a software system.
🏷 Tagging Strategy
+
A strategy for tagging resources in cloud environments to keep track of ownership and costs.
🔍 Observability
+
The ability to infer the internal states of a system based on the outputs it produces.
📦 Artifact Repository
+
A storage space for binary and source code artifacts (e.g., JFrog Artifactory).
🧰 Toolchain Management
+
The process of selecting, integrating, and managing the right set of tools to support collaborative development, build, test, and release.
📟 On-call Duty
+
The responsibility of engineers to be available to troubleshoot and resolve issues that arise in a production environment.
🎛 Feature Toggles
+
A technique that allows teams to modify system behavior without changing code.
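A minimal sketch of the idea in C# (the interface and flag name are hypothetical; real systems typically read flag values from a configuration service rather than hand-rolled code):
public interface IFeatureToggles
{
    bool IsEnabled(string feature); // flag values come from config, not code
}

public class CheckoutService
{
    private readonly IFeatureToggles _toggles;
    public CheckoutService(IFeatureToggles toggles) => _toggles = toggles;

    public void Checkout()
    {
        if (_toggles.IsEnabled("NewPricingEngine"))
        {
            // new code path, shipped dark until the flag is flipped
        }
        else
        {
            // existing behavior
        }
    }
}
Because the flag is evaluated at runtime, behavior changes without redeploying code.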
📑 License Management
+
The process of managing and optimizing the purchase, deployment, maintenance, utilization, and disposal of software applications within an organization.
🐳 Docker Images
+
Docker images are lightweight, stand-alone, executable packages that include everything needed to run a piece of software.
🔄 Kubernetes Pods
+
A pod is the smallest and simplest unit in the Kubernetes object model that you create or deploy.
🚀 Deployment Strategies
+
Techniques for updating applications, such as rolling updates, blue/green deployments, or canary releases.
⚙ YAML, JSON
+
These are data serialization languages often used for configuration files and in applications where data is being stored or transmitted.
🖥 Virtual Machines
+
A software emulation of a physical computer, running an operating system and applications just like a physical computer.
💽 Disk Imaging
+
The process of copying the contents of a computer hard disk into a data file or disk image.
📚 Knowledge Sharing
+
A key aspect of DevOps culture, involving the sharing of knowledge and best practices across the organization.
🌐 Cloud Services Models
+
Different models of cloud services, including IaaS, PaaS, and SaaS.
💤 Idle Process Management
+
The management and removal of idle processes to free up resources.
🕸 Service Mesh
+
A dedicated infrastructure layer for handling service-to-service communication, often used in microservices architecture.
💼 Project Management Tools
+
Tools used for project management, like Jira, Trello, or Asana.
📡 Proxy Servers
+
Servers that act as intermediaries for requests from clients seeking resources from other servers.
🌁 Cloud Migration
+
The process of moving data, applications, and other business elements from an organization's onsite computers to the cloud.
🌥 Hybrid Cloud
+
A cloud computing environment that uses a mix of on-premises private cloud and third-party public cloud services, with orchestration between the two platforms.
☸ Helm in Kubernetes
+
Helm is a package manager for Kubernetes that allows developers and operators to more easily package, configure, and deploy applications and services onto Kubernetes clusters.
🔒 Secure Sockets Layer (SSL)
+
A standard security technology for establishing an encrypted link between a server and a client.
👥 User Experience (UX)
+
The process of creating products that provide meaningful and relevant experiences to users.
🔄 Reverse Proxy
+
A type of proxy server that retrieves resources on behalf of a client from one or more servers.
👾 Anomaly Detection
+
The identification of rare items, events, or observations which raise suspicions by differing significantly from the majority of the data.
🗺 Site Reliability Engineering (SRE)
+
A discipline that incorporates aspects of software engineering and applies them to infrastructure and operations problems, with the main goal of creating scalable and highly reliable software systems. The SRE role originated at Google to bridge the gap between development and operations by applying a software engineering mindset to system administration; SREs use software as a tool to manage systems, solve problems, and automate operations tasks. The core principle of SRE is to treat operations as a software problem: their work includes automation, continuous integration/delivery, ensuring reliability and uptime, and enforcing performance, and they work closely with product teams to advise on operability, prepare systems for new releases, and ensure they can scale to the demands of the business.
🔄 Autoscaling
+
A cloud computing feature that automatically adds or removes compute resources depending upon actual usage.
🔑 SSH (Secure Shell)
+
A cryptographic network protocol for operating network services securely over an unsecured network.
🧪 Test-Driven Development (TDD)
+
A software development process that relies on the repetition of a very short development cycle: requirements are turned into very specific test cases, then the code is improved so that the tests pass.
💡 Problem Solving
+
The process of finding solutions to difficult or complex issues.
💼 IT Service Management (ITSM)
+
The activities that are performed by an organization to design, plan, deliver, operate and control information technology (IT) services offered to customers.
👀 Peer Reviews
+
The evaluation of work by one or more people with similar competencies who are not the people who produced the work.
📊 Data Analysis
+
The process of inspecting, cleansing, transforming, and modeling data with the goal of discovering useful information, informing conclusions, and supporting decision-making.
🎨 UI Design
+
The process of making interfaces in software or computerized devices with a focus on looks or style.
🌐 Content Delivery Network (CDN)
+
A geographically distributed network of proxy servers and their data centers.
🖼 Visual Regression Testing
+
A form of regression testing that involves checking a system's graphical user interface (GUI) against previous versions.
🔄 Canary Deployment
+
A pattern for rolling out releases to a subset of users or servers.
📨 Messaging Systems
+
Communication systems for exchanging messages between distributed systems (e.g., RabbitMQ, Apache Kafka).
🔐 OAuth
+
An open standard for access delegation, commonly used as a way for Internet users to grant websites or applications access to their information on other websites but without giving them the passwords.
👤 Identity and Access Management (IAM)
+
A framework of business processes, policies and technologies that facilitates the management of electronic or digital identities.
🗄 NoSQL Databases
+
Database systems designed to handle large volumes of data that do not fit the traditional relational model (e.g., MongoDB, Cassandra).
🏝 Serverless Functions
+
Also known as Functions as a Service (FaaS), these are a type of cloud service that allows you to execute specific functions in response to events (e.g., AWS Lambda).
🔷 Hexagonal Architecture
+
Also known as Ports and Adapters, this is a design pattern that favors the separation of concerns and loose coupling.
🔁 ETL (Extract, Transform, Load)
+
A data warehousing process that uses batch processing to help business users analyze and report on data relevant to their business focus.
📚 Data Warehousing
+
The process of constructing and using a data warehouse, which is a system used for reporting and data analysis.
📊 Big Data
+
Extremely large data sets that may be analyzed computationally to reveal patterns, trends, and associations, especially relating to human behavior and interactions.
🌩 Edge Computing
+
A distributed computing paradigm that brings computation and data storage closer to the location where it is needed, to improve response times and save bandwidth.
🔍 Log Analysis
+
The process of reviewing and evaluating log files from various sources to identify trends or potential security threats.
🎛 Dashboarding
+
The process of creating a visual representation of data, which can be used to analyze and make decisions.
🔑 Key Management
+
The administrative control of creating, distributing, using, storing, and replacing cryptographic keys in a cryptosystem.
🔍 A/B Testing
+
A randomized experiment with two variants, A and B, which are the control and variation in the controlled experiment.
🔒 HTTPS (HTTP Secure)
+
An extension of the Hypertext Transfer Protocol. It is used for secure communication over a computer network, and is widely used on the Internet.
🌐 Web Application Firewall (WAF)
+
A firewall that monitors, filters, or blocks data packets as they travel to and from a web application.
🔏 Single Sign-On (SSO)
+
An authentication scheme that allows a user to log in with a single ID and password to any of several related, yet independent, software systems.
🔁 Blue-Green Deployment
+
A release management strategy that reduces downtime and risk by running two identical production environments called Blue and Green.
🌁 Fog Computing
+
A decentralized computing infrastructure in which data, compute, storage, and applications are distributed in the most logical, efficient place between the data source and the cloud.
⛓ Blockchain
+
Blockchain is a type of distributed ledger technology that maintains a growing list of records, called blocks, that are linked using cryptography. Each block contains a cryptographic hash of the previous block, a timestamp, and transaction data. The design of a blockchain is inherently resistant to data modification: once recorded, the data in any given block cannot be altered retroactively without altering all subsequent blocks. This makes blockchain technology suitable for recording events, medical records, identity management, transaction processing, and documenting provenance, among other things.
🚀 Progressive Delivery
+
A methodology that focuses on delivering new functionality gradually to prevent issues and minimize risk.
📝 RFC (Request for Comments)
+
A type of publication from the technology community that describes methods, behaviors, research, or innovations applicable to the working of the Internet and Internet-connected systems.
🔗 REST (Representational State Transfer)
+
An architectural style for designing networked applications, often used in web services development.
🔑 Secrets Management
+
The process of managing digital authentication credentials like passwords, keys, and tokens.
🔐 HSM (Hardware Security Module)
+
A physical computing device that safeguards and manages digital keys, performs encryption and decryption functions for digital signatures, strong authentication and other cryptographic functions.
⛅ Cloud-native Technologies
+
Technologies that empower organizations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds.
⚠ Vulnerability Scanning
+
The process of inspecting potential points of exploit on a computer or network to identify security holes.
🔗 Microservices
+
An architectural style that structures an application as a collection of loosely coupled services, which implement business capabilities.
🎫 JWT (JSON Web Token)
+
An open standard (RFC 7519) that defines a compact and self-contained way for securely transmitting information between parties as a JSON object.
🔬 Benchmarking
+
The practice of comparing business processes and performance metrics to industry bests and best practices from other companies.
🌉 Cross-Functional Collaboration
+
Collaboration between different functional areas within an organization to achieve common goals.

ADO.NET

+
Access data from DataReader?
+
Call ExecuteReader() and iterate rows using Read(). Access values using index or column names. It is forward-only and read-only.
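A minimal sketch, assuming a SQL Server connection string and an Employees table (both placeholders):
using System.Data.SqlClient;

using (var con = new SqlConnection(connectionString)) // connectionString is a placeholder
using (var cmd = new SqlCommand("SELECT Id, Name FROM Employees", con))
{
    con.Open();
    using (SqlDataReader reader = cmd.ExecuteReader())
    {
        while (reader.Read())                      // forward-only iteration
        {
            int id = reader.GetInt32(0);           // access by column index
            string name = (string)reader["Name"];  // or by column name
        }
    }
}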
ADO.NET Components.
+
Key components are Connection, Command, DataReader, DataAdapter, and DataSet. Each helps in performing database operations efficiently.
ADO.NET Data Provider?
+
A Data Provider is a set of classes (Connection, Command, DataAdapter, DataReader) that interacts with a specific database like SQL Server, Oracle, or OleDb.
ADO.NET Data Providers?
+
Examples: SqlClient, OleDb, Odbc, and OracleClient.
ADO.NET?
+
ADO.NET is a set of classes in the .NET framework used to access and manipulate data from data sources such as SQL Server, Oracle, and XML.
ADO.NET?
+
ADO.NET is a .NET framework component used to interact with databases. It provides disconnected and connected communication models and supports commands, data readers, connection objects, and datasets.
ADO.NET?
+
ADO.NET is a data access framework in .NET used to interact with databases. It supports connected and disconnected models and works with SQL Server, Oracle, and others.
ADO.NET?
+
ADO.NET is a data access framework in .NET for interacting with databases using DataReader, DataSet, and DataAdapter.
Advantages of ADO.NET?
+
Supports disconnected model, XML integration, scalable architecture, and high performance. Works with multiple data sources and provides secure parameterized queries.
Aggregate in LINQ?
+
Perform operations like Sum, Count, Min, Max, Average on collections.
Authentication techniques for SQL Server
+
Common authentication types are Windows Authentication, SQL Server Authentication, and Mixed Mode Authentication.
Benefits of ADO.NET?
+
Scalable, secure, supports XML, disconnected architecture, multiple DB providers.
Best method to get two values
+
Use ExecuteReader() or stored procedure returning multiple columns.
BindingSource class in ADO.NET?
+
BindingSource acts as a mediator between UI and data. It simplifies sorting, filtering, and navigation with data controls like DataGridView.
boxing and unboxing?
+
Boxing converts a value type into object type. Unboxing extracts the value back.
Boxing/unboxing?
+
Boxing: value type → object, Unboxing: object → value type
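In code:
int i = 42;
object boxed = i;    // boxing: the value is copied to the heap as an object
int j = (int)boxed;  // unboxing: an explicit cast extracts the value back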
Can multiple tables be loaded into a DataSet?
+
Yes, multiple tables can be loaded into a DataSet using DataAdapter.Fill(), and relationships can be defined between them.
Catch multiple exceptions at once?
+
Use catch(Exception ex) when(ex is X || ex is Y) or multiple catch blocks.
Classes available in System.Data Namespace
+
Includes DataSet, DataTable, DataRow, DataColumn, DataRelation, Constraint, and DataView.
Classes in System.Data.Common Namespace
+
Includes DbConnection, DbCommand, DbDataAdapter, DbDataReader, and DbParameter, offering provider-independent access.
Clear(), Clone(), Copy() in DataSet?
+
Clear(): removes all data, keeps schema; Clone(): copies schema only; Copy(): copies schema + data.
Clone() method of DataSet?
+
Clone() copies the structure of a DataSet including tables, schemas, and constraints. It does not copy data. It is used when the same schema is needed for new datasets.
Command object in ADO.NET?
+
Command object represents an SQL statement or stored procedure to execute against a data source.
Commands used with DataAdapter
+
DataAdapter uses SelectCommand, InsertCommand, UpdateCommand, and DeleteCommand for CRUD operations. These commands define how data is fetched and updated between DataSet and database.
Components of ADO.NET Data Provider
+
ADO.NET Data Provider consists of four main objects: Connection, Command, DataReader, and DataAdapter. The Connection connects to the database, Command executes SQL, DataReader retrieves forward-only data, and DataAdapter fills DataSets and updates changes.
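A minimal sketch of the disconnected pattern these objects enable (the DataReader pattern is shown in an earlier answer; connectionString and the Employees table are placeholders):
using System.Data;
using System.Data.SqlClient;

using (var con = new SqlConnection(connectionString))
using (var adapter = new SqlDataAdapter("SELECT Id, Name FROM Employees", con))
{
    var ds = new DataSet();
    adapter.Fill(ds, "Employees");                // opens and closes the connection as needed

    ds.Tables["Employees"].Rows[0]["Name"] = "Updated";

    var builder = new SqlCommandBuilder(adapter); // generates INSERT/UPDATE/DELETE commands
    adapter.Update(ds, "Employees");              // pushes the change back to the database
}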
Concurrency in EF?
+
Manages simultaneous access to data using Optimistic or Pessimistic concurrency.
Connection object in ADO.NET?
+
Connection object represents a connection to a data source and is used to open and close connections.
Connection object properties and members?
+
Common properties include ConnectionString, State, Database, ServerVersion, and DataSource. Methods include Open(), Close(), CreateCommand(), and BeginTransaction().
Connection Object?
+
The connection object establishes communication between application and database. It includes connection strings and manages session initiation and termination.
Connection pooling in ADO.NET?
+
Connection pooling reuses active connections to improve performance instead of opening a new connection every time.
Connection Pooling in ADO.NET?
+
Connection pooling reuses existing database connections instead of creating new ones repeatedly. It improves performance and reduces overhead by efficiently managing active and idle connections.
Connection Pooling?
+
Reuses previously opened DB connections to reduce overhead and improve scalability.
Connection pooling?
+
Reuses open database connections to improve performance and scalability.
Connection timeout in ADO.NET?
+
Connection timeout specifies the time to wait while establishing a connection before throwing an exception.
ConnectionString?
+
Defines DB server, database name, credentials, and options for establishing connection.
Copy() method of DataSet?
+
Copy() creates a duplicate DataSet including structure and data. It is useful when preserving a dataset snapshot.
Create and Manage Connections in ADO.NET?
+
Use classes like SqlConnection with a valid connection string. Methods such as Open() and Close() handle connection lifecycle, often used inside using(){} blocks.
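For example, a minimal sketch of the lifecycle inside a using block (connectionString is a placeholder):
using (var con = new SqlConnection(connectionString))
{
    con.Open();
    // ... execute commands ...
} // Dispose() closes the connection even if an exception is thrown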
Create SqlConnection?
+
SqlConnection con = new SqlConnection("connectionString");
con.Open();
DAO?
+
DAO (Data Access Object) is a design pattern used to abstract and encapsulate database access logic. It helps separate persistence logic from business logic.
Data Providers in ADO.NET
+
Examples include SqlClient, OleDb, OracleClient, Odbc, and EntityClient.
DataAdapter and its Property?
+
DataAdapter is used to transfer data between database and DataSet. Properties include SelectCommand, InsertCommand, UpdateCommand, and DeleteCommand.
DataAdapter in ADO.NET?
+
DataAdapter acts as a bridge between a DataSet and a data source for retrieving and saving data.
DataAdapter in ADO.NET?
+
DataAdapter acts as a bridge between the database and DataSet. It uses select, insert, update, and delete commands to sync data between memory and the database.
DataAdapter?
+
Acts as a bridge between DataSet and database for retrieving and updating data.
DataAdapter?
+
Acts as a bridge between DataSet and DB, provides methods for Fill() and Update().
DataColumn, DataRow, DataTable relationship?
+
DataTable holds rows and columns; DataRow is a record; DataColumn defines schema.
DataReader in ADO.NET?
+
DataReader is a forward-only, read-only stream of data from a data source, optimized for performance.
DataReader Object?
+
A fast, forward-only, read-only way to retrieve data from a database. Works in connected mode.
DataReader?
+
A DataReader provides fast, forward-only reading of results from a query. It keeps the connection open while reading data, making it ideal for large datasets.
DataReader?
+
Forward-only, read-only, fast access to database records.
DataRelation Class?
+
It establishes parent-child relational mapping between DataTables inside a DataSet, similar to foreign keys in a database.
DataSet in ADO.NET?
+
DataSet is an in-memory, disconnected collection of data tables, relationships, and constraints.
Dataset Object?
+
A disconnected, in-memory collection of DataTables supporting relationships and XML.
DataSet replaces ADO Recordset?
+
Dataset provides disconnected, XML-based storage, supporting multiple tables, relationships, and offline editing. Unlike Recordset, it does not require a live database connection.
DataSet?
+
An in-memory representation of tables, relationships, and constraints, supports disconnected data.
DataTable in ADO.NET?
+
DataTable is a single in-memory table of data in a DataSet.
DataTable in ADO.NET?
+
A DataTable stores rows and columns similar to a database table. It exists in memory and can be part of a DataSet, supporting constraints, relations, and indexing.
DataView in ADO.NET?
+
DataView provides a customizable view of a DataTable, allowing sorting, filtering, and searching.
DataView?
+
A DataView provides a sorted, filtered view of a DataTable without modifying the actual data. It supports searching and custom ordering.
DataView?
+
DataView provides filtered and sorted views of a DataTable without modifying original data.
Default CommandTimeout value
+
The default value of CommandTimeout is 30 seconds.
Define DataSet structure?
+
A DataSet stores relational data in memory as tables, relations, and constraints. It can contain multiple DataTables and supports XML schema definitions using ReadXmlSchema() and WriteXmlSchema().
DifBet AcceptChanges() and RejectChanges() in DataSet?
+
AcceptChanges() commits changes to DataSet; RejectChanges() rolls back changes.
DifBet AcceptChanges() and RejectChanges()?
+
AcceptChanges commits changes; RejectChanges reverts changes to original state.
DifBet ADO and ADO.NET?
+
ADO is COM-based and works with connected architecture; ADO.NET is .NET-based and supports both connected and disconnected architecture.
DifBet BeginTransaction() and EnlistTransaction()?
+
BeginTransaction starts a local transaction; EnlistTransaction enrolls the connection in a distributed transaction.
DifBet Close() and Dispose() on SqlConnection?
+
Close() closes the connection; Dispose() releases all resources used by the connection object.
DifBet CommandBehavior.CloseConnection and default behavior?
+
CloseConnection automatically closes connection when DataReader is closed; default keeps connection open.
DifBet CommandType.Text and CommandType.StoredProcedure?
+
CommandType.Text executes raw SQL queries; CommandType.StoredProcedure executes stored procedures.
DifBet connected and disconnected architecture in ADO.NET?
+
Connected architecture uses active database connection (DataReader); disconnected architecture uses in-memory objects (DataSet).
DifBet connected and disconnected DataSet updates?
+
Connected updates immediately affect the database; disconnected updates require calling DataAdapter.Update().
DifBet connection string and connection object?
+
Connection string contains parameters to connect to database; connection object uses connection string to establish connection.
DifBet DataAdapter.Fill(DataSet) and Fill(DataTable)?
+
Fill(DataSet) can load multiple tables; Fill(DataTable) loads single table.
DifBet DataAdapter.MissingSchemaAction.AddWithKey and Add?
+
AddWithKey loads primary key info; Add loads only columns without keys.
DifBet DataAdapter.Update() and SqlCommand.ExecuteNonQuery()?
+
Update() propagates DataSet changes; ExecuteNonQuery executes a single SQL command.
DifBet DataColumn.Expression and DataTable.Compute()?
+
DataColumn.Expression defines calculated column in DataTable; Compute evaluates expression on-demand.
DifBet DataReader and DataAdapter?
+
DataReader is forward-only, read-only, connected; DataAdapter works with DataSet in disconnected mode.
DifBet DataReader and DataSet?
+
DataReader is connected, fast, and read-only; DataSet is disconnected, can hold multiple tables, and supports updates.
DifBet DataRowState.Added, Modified, Deleted, and Unchanged?
+
Added: new row; Modified: updated row; Deleted: marked for deletion; Unchanged: no changes.
DifBet DataSet and DataTable?
+
DataSet can hold multiple tables and relationships; DataTable represents a single table.
DifBet DataSet.EnforceConstraints = true and false?
+
True enforces constraints (keys, relationships); false disables constraint checking temporarily.
DifBet DataSet.GetChanges() and DataSet.AcceptChanges()?
+
GetChanges() returns a copy of changes made; AcceptChanges() commits changes to DataSet.
DifBet DataSet.Merge() and ImportRow()?
+
Merge combines two DataSets while preserving changes; ImportRow copies a single DataRow into another DataTable.
DifBet DataSet.ReadXml() and DataSet.WriteXml()?
+
ReadXml loads data from XML; WriteXml saves data to XML.
DifBet DataSet.ReadXmlSchema() and DataSet.WriteXmlSchema()?
+
ReadXmlSchema reads only schema; WriteXmlSchema writes only schema to XML.
DifBet DataSet.Relations.Add() and DataTable.ChildRelations?
+
Relations.Add() creates relationship between tables; ChildRelations shows existing child relations.
DifBet DataSet.Tables and DataSet.Tables["TableName"]?
+
Tables returns the collection of all tables; Tables["TableName"] returns a specific table.
DifBet DataTable.Compute() and DataView.RowFilter?
+
Compute evaluates expressions like SUM, COUNT; RowFilter filters rows dynamically.
DifBet DataTable.NewRow() and DataTable.Rows.Add()?
+
NewRow() creates a new DataRow; Rows.Add() adds DataRow to DataTable.
DifBet DataTable.Select() and DataView.RowFilter?
+
DataTable.Select() returns an array of DataRows; DataView.RowFilter filters rows dynamically in a DataView.
DifBet disconnected DataSet and connected DataReader?
+
DataSet is disconnected and can store multiple tables; DataReader is connected, forward-only, and read-only.
DifBet disconnected DataSet and XML in ADO.NET?
+
DataSet stores relational data in memory; XML stores hierarchical data in a text format.
DifBet ExecuteReader, ExecuteScalar, and ExecuteNonQuery?
+
ExecuteReader returns a DataReader; ExecuteScalar returns a single value; ExecuteNonQuery executes commands like INSERT, UPDATE, DELETE.
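Side by side, assuming an open SqlConnection con and an Orders table (both placeholders):
// ExecuteReader: streams result rows
using (var reader = new SqlCommand("SELECT Id FROM Orders", con).ExecuteReader())
{
    while (reader.Read()) { /* read columns */ }
}

// ExecuteScalar: first column of the first row
object count = new SqlCommand("SELECT COUNT(*) FROM Orders", con).ExecuteScalar();

// ExecuteNonQuery: number of affected rows
int rows = new SqlCommand("DELETE FROM Orders WHERE Id = 1", con).ExecuteNonQuery();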
DifBet ExecuteScalar() and ExecuteNonQuery()?
+
ExecuteScalar returns a single value; ExecuteNonQuery returns number of rows affected.
DifBet ExecuteXmlReader() and ExecuteReader()?
+
ExecuteXmlReader() returns XML data as XmlReader; ExecuteReader() returns relational data as DataReader.
DifBet Fill() and Update() methods in DataAdapter?
+
Fill() populates a DataSet with data from a data source; Update() saves changes from a DataSet back to the data source.
DifBet FillSchema() and Fill() in DataAdapter?
+
FillSchema() loads structure (columns, constraints); Fill() loads data into DataSet.
DifBet GetSchema() and DataTable.Columns?
+
GetSchema() retrieves database metadata; DataTable.Columns retrieves column info of DataTable.
DifBet Load() and Fill() in DataAdapter?
+
Load() loads data into DataTable directly; Fill() loads data into DataSet.
DifBet multiple ResultSets and DataSet.Tables?
+
Multiple ResultSets are multiple queries from database; DataSet.Tables stores multiple tables in memory.
DifBet optimistic concurrency using Timestamp and original values?
+
Timestamp compares version number for updates; original values compare previous data values.
DifBet ReadOnly and ReadWrite DataSet?
+
ReadOnly DataSet cannot update the source; ReadWrite DataSet allows changes to be persisted back.
DifBet schema-only and key information loading?
+
Schema-only loads column structure; key information includes primary, foreign keys, and constraints.
DifBet SqlBulkCopy and DataAdapter.Update()?
+
SqlBulkCopy is fast bulk insert; DataAdapter.Update() updates based on DataRow changes.
DifBet SqlCommand and OleDbCommand?
+
SqlCommand is SQL Server-specific; OleDbCommand works with OLE DB providers for multiple databases.
DifBet SqlCommand.ExecuteReader(CommandBehavior) options?
+
Options like SingleRow, SingleResult, CloseConnection modify behavior of DataReader.
DifBet SqlCommand.Parameters.Add() and AddWithValue()?
+
Add() allows specifying type and size; AddWithValue() infers type from value.
DifBet SqlCommandBuilder and manually writing SQL commands?
+
CommandBuilder automatically generates INSERT, UPDATE, DELETE commands; manual SQL provides more control.
DifBet SqlConnection and OleDbConnection?
+
SqlConnection is specific to SQL Server; OleDbConnection is generic and can connect to multiple databases via OLE DB provider.
DifBet SqlDataAdapter and OleDbDataAdapter?
+
SqlDataAdapter is SQL Server-specific; OleDbDataAdapter works with OLE DB providers for multiple databases.
DifBet SqlDataAdapter and SqlDataReader?
+
DataAdapter works with disconnected DataSet; DataReader is connected and forward-only.
DifBet SqlDataAdapter.Fill() and SqlDataAdapter.FillSchema()?
+
Fill() loads data; FillSchema() loads table structure including constraints.
DifBet SqlDataReader and SqlDataAdapter?
+
SqlDataReader is connected, fast, and read-only; SqlDataAdapter works in disconnected mode with DataSet.
DifBet synchronous and asynchronous ADO.NET operations?
+
Synchronous operations block until complete; asynchronous operations run in background without blocking.
DifBet TableMapping and ColumnMapping?
+
TableMapping maps source table names to DataSet tables; ColumnMapping maps source columns to DataSet columns.
DifBet typed and untyped DataSet?
+
Typed DataSet has a predefined schema with compile-time checks; untyped is generic and dynamic.
DiffBet ADO and ADO.NET.
+
ADO is connected and recordset-based, whereas ADO.NET supports disconnected architecture using DataSet. ADO.NET is XML-based and works well with distributed applications.
DiffBet ADO and ADO.NET?
+
ADO uses connected model and Recordsets. ADO.NET supports disconnected model, XML, and multiple tables.
DiffBet Command and CommandBuilder
+
Command executes SQL statements, while CommandBuilder automatically generates SQL (Insert, Update, Delete) commands for DataAdapters.
DiffBet connected and disconnected model?
+
Connected: DataReader; requires a live DB connection. Disconnected: DataSet and DataAdapter; works offline.
DiffBet DataReader and DataAdapter?
+
DataReader is forward-only, read-only; DataAdapter fills DataSet and supports disconnected operations.
DiffBet DataReader and DataSet.
+
DataReader: forward-only, read-only, connected model, high performance. DataSet: in-memory collection, disconnected model, supports navigation and editing.
DiffBet DataReader and Dataset?
+
DataReader is fast, connected, read-only; Dataset is disconnected, editable, and supports multiple tables.
DiffBet DataSet and DataReader.
+
DataSet is disconnected, in-memory, and editable; DataReader is connected, forward-only, and read-only.
DiffBet DataSet and Recordset?
+
DataSet is disconnected, supports multiple tables and relationships., Recordset is connected and read-only or updatable depending on type.
DiffBet Dataset.Clone and Dataset.Copy
+
Clone() copies only the schema of the DataSet without data. Copy() duplicates both the schema and data, creating a full dataset replica.
DiffBet ExecuteScalar, ExecuteReader, ExecuteNonQuery?
+
ExecuteScalar: single value; ExecuteReader: forward-only rows; ExecuteNonQuery: insert/update/delete.
DiffBet Fill() and Update()?
+
Fill() loads data from DB to DataSet; Update() writes changes back to DB.
DiffBet IQueryable and IEnumerable?
+
IQueryable: server-side execution (LINQ to SQL/Entities); IEnumerable: client-side, in-memory execution.
DiffBet OLEDB and SQLClient Providers
+
OLEDB provider works with multiple data sources like Access, Oracle, and Excel, while SQLClient is optimized specifically for SQL Server. SQLClient offers better speed, security, and support for SQL Server features like stored procedures and transactions.
Difference: Response.Expires vs Response.ExpiresAbsolute
+
Expires specifies duration in minutes. ExpiresAbsolute sets exact expiration date/time.
Different Execute Methods in ADO.NET
+
Key execution methods include ExecuteReader() for row data, ExecuteScalar() for a single value, ExecuteNonQuery() for insert/update/delete operations, and ExecuteXmlReader() for XML data.
Disconnected data?
+
Disconnected data allows retrieving, modifying, and working with data without continuous DB connection. DataSet and DataTable support this model.
Dispose() in ADO.NET?
+
Releases unmanaged resources like DB connections, commonly used with using block.
Do we use stored procedures in ADO.NET?
+
Yes, stored procedures can be executed using the Command object by setting CommandType.StoredProcedure.
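A minimal sketch (the procedure and parameter names are hypothetical; con is an open SqlConnection):
using (var cmd = new SqlCommand("usp_GetEmployee", con))
{
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.Parameters.Add("@Id", SqlDbType.Int).Value = 7;
    using (var reader = cmd.ExecuteReader())
    {
        while (reader.Read()) { /* ... */ }
    }
}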
EF Migration?
+
Updates DB schema as models evolve without losing data.
Execute raw SQL in EF?
+
Use context.Database.SqlQuery() or ExecuteSqlCommand().
ExecuteNonQuery()?
+
This method executes commands that do not return results (Insert, Update, Delete). It returns the number of affected rows.
ExecuteNonQuery()?
+
Executes insert, update, or delete commands and returns affected row count.
ExecuteReader()?
+
Executes a query and returns a DataReader for reading rows forward-only.
ExecuteScalar()?
+
ExecuteScalar() returns a single value from a query, typically used for count, sum, or identity queries. It is faster than returning full data structures.
ExecuteScalar()?
+
Executes a query that returns a single value (first column of first row).
Explain DataTable, DataRow & DataColumn relationship.
+
DataTable stores rows and columns of data. DataRow represents a single record, while DataColumn defines the schema (fields). Together they form structured tabular data.
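In code:
var table = new DataTable("Employees");
table.Columns.Add("Id", typeof(int));      // DataColumn: defines the schema
table.Columns.Add("Name", typeof(string));

DataRow row = table.NewRow();              // DataRow: one record
row["Id"] = 1;
row["Name"] = "Asha";
table.Rows.Add(row);                       // DataTable: holds the rows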
Explain ExecuteReader().
+
ExecuteReader returns a DataReader object to read result sets row-by-row in forward-only mode, ideal for performance in large data retrieval.
Explain ExecuteXmlReader?
+
ExecuteXmlReader is used with SQL Server to read XML data returned by a command. It returns an XmlReader object that allows forward-only streaming of XML. It is useful when retrieving XML documents from queries or stored procedures.
Explain OleDbDataAdapter Command Properties with Example?
+
OleDbDataAdapter has properties like SelectCommand, InsertCommand, UpdateCommand, and DeleteCommand. These commands define SQL operations for reading and updating data. Example:
adapter.SelectCommand = new OleDbCommand("SELECT * FROM Students", connection);
Explain the Clear() method of DataSet?
+
Clear() removes all rows from all DataTables within the DataSet. The structure remains intact, but data is deleted. It is useful when reloading fresh data.
Explain the ExecuteScalar method in ADO.NET?
+
ExecuteScalar executes a SQL command and returns a single scalar value. It is commonly used for aggregate queries like COUNT(), MAX(), MIN(), or retrieving a single field. It improves performance as it does not return rows or a dataset. It returns the first column of the first row.
Features of ADO.NET?
+
Disconnected model, XML support, DataReader, DataSet, DataAdapter, object pooling.
Filtering in LINQ?
+
Using Where() to filter elements by a condition.
GetChanges() in DataSet?
+
Returns modified rows (Added, Deleted, Modified) from DataSet for update operations.
GetChanges()?
+
GetChanges() returns a copy of DataSet with only changed rows (Added, Deleted, Modified). Useful for updating only modified records.
Grouping in LINQ?
+
Organizes elements into groups based on a key using GroupBy().
HasChanges() in DataSet?
+
Checks if DataSet has any changes since last load or accept changes.
HasChanges() method of DataSet?
+
HasChanges() checks if the DataSet contains modified, deleted, or new rows. It returns true if changes exist, helping detect update needs.
IDisposable?
+
Interface for releasing unmanaged resources manually via Dispose().
Immediate Execution in LINQ?
+
Using methods like ToList(), Count() forces query execution immediately.
Important Classes in ADO.NET.
+
Key classes include SqlConnection, SqlCommand, SqlDataReader, SqlDataAdapter, DataSet, DataTable, and SqlParameter.
Is it possible to edit data in Repeater control?
+
No, Repeater does not provide built-in editing support like GridView.
Joining in LINQ?
+
Combines collections/tables based on key with Join() or GroupJoin().
Keyword to accept variable parameters
+
The keyword params is used to accept a variable number of arguments in C#.
Layers of ADO.NET
+
The two layers are Connected Layer (Connection, Command, DataReader) and Disconnected Layer (DataSet, DataTable, DataAdapter).
Lazy vs eager loading in EF?
+
Lazy: loads related entities on demand; Eager: loads them with the query using Include().
LINQ deferred execution?
+
Query runs only when enumerated (foreach, ToList()).
LINQ?
+
LINQ (Language Integrated Query) allows querying data using C# syntax across objects, SQL, XML, and Entity Framework.
LINQ?
+
LINQ (Language Integrated Query) allows querying objects, collections, databases, and XML using C# language syntax.
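A small example combining several operators covered in the surrounding entries (in-memory data, method syntax):
using System.Linq;

var orders = new[]
{
    new { Customer = "A", Total = 250m },
    new { Customer = "B", Total = 90m },
    new { Customer = "A", Total = 40m }
};

var totalsByCustomer = orders
    .Where(o => o.Total > 50)                       // filtering
    .GroupBy(o => o.Customer)                       // grouping
    .Select(g => new { Customer = g.Key,            // projection
                       Sum = g.Sum(o => o.Total) }) // aggregate
    .OrderBy(x => x.Customer)                       // sorting
    .ToList();                                      // forces immediate execution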
Main components of ADO.NET?
+
Connection, Command, DataReader, DataSet, DataAdapter, DataTable, and DataView.
Method in OleDbAdapter to populate dataset
+
The method is Fill(), used to load records into DataSet/DataTable.
Method in OleDbDataAdapter populates a dataset with records?
+
The Fill() method of OleDbDataAdapter populates a DataSet or DataTable with data. It executes the SELECT command and loads the returned rows into the dataset for disconnected use.
Method to execute SQL returning single value
+
The method is ExecuteScalar(), which returns the first column of the first row.
Method used to read XML data
+
The Read() or Load() methods using XmlReader or XDocument are used to process XML files.
Method used to sort data
+
Sorting can be done using DataView.Sort property.
Methods of DataSet.
+
Common methods include AcceptChanges(), RejectChanges(), ReadXml(), WriteXml(), and GetChanges() for data manipulation and synchronization.
Methods of XML DataSet Object
+
Common methods include ReadXml(), WriteXml(), ReadXmlSchema(), and WriteXmlSchema(), which allow reading and writing XML data and schema.
Methods under SqlCommand
+
Common methods include ExecuteReader(), ExecuteScalar(), ExecuteNonQuery(), ExecuteXmlReader(), Cancel(), Prepare() and ExecuteAsync() for asynchronous calls.
Namespaces for Data Access.
+
Common namespaces: System.Data, System.Data.SqlClient, and System.Data.OleDb.
Namespaces used in ADO.NET?
+
Common namespaces: System.Data, System.Data.SqlClient, and System.Data.OleDb.
Navigation property in EF?
+
Represents relationships and allows traversing related entities easily.
Object Pooling?
+
A technique to reuse created objects instead of recreating new ones, improving performance.
object pooling?
+
Reusing instantiated objects to reduce overhead and improve performance.
Object used to add relationship
+
DataRelation object is used to create relationships between DataTables.
Optimistic concurrency in ADO.NET?
+
Optimistic concurrency allows multiple users to access data and checks for conflicts only when updating.
OrderBy/ThenBy in LINQ?
+
Sorts collection first by OrderBy, then further sorting with ThenBy.
Parameterized query in ADO.NET?
+
A parameterized query uses parameters to prevent SQL injection and pass values safely.
Parameterized query?
+
Prevents SQL injection and allows passing parameters safely in SqlCommand.
Parameters in ADO.NET?
+
Parameters are used in parameterized queries or stored procedures to prevent SQL injection and pass values securely.
Pessimistic concurrency in ADO.NET?
+
Pessimistic concurrency locks data while a user is editing to prevent conflicts.
Preferred method for executing SQL with parameters?
+
Use Parameterized queries with SqlCommand and Parameters collection. This prevents SQL injection and handles data safely.
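A minimal sketch (con, the Users table, and userInput are placeholders):
using (var cmd = new SqlCommand("SELECT * FROM Users WHERE Email = @Email", con))
{
    cmd.Parameters.Add("@Email", SqlDbType.NVarChar, 256).Value = userInput;
    using (var reader = cmd.ExecuteReader())
    {
        // userInput is treated strictly as data, never as executable SQL
    }
}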
Projection in LINQ?
+
Selecting specific columns or transforming data with Select().
Properties and Methods of Command Object.
+
Properties: CommandText, Connection, CommandType. Methods: ExecuteReader(), ExecuteScalar(), ExecuteNonQuery().
Provider used for MS Access, Oracle, etc.
+
The OleDb provider is used to connect to multiple heterogeneous databases like MS Access, Excel, and Oracle.
RowVersion in ADO.NET?
+
RowVersion represents the state of a DataRow (Original, Current, Proposed) for concurrency control.
SqlCommand Object?
+
The SqlCommand object executes SQL queries and stored procedures against a SQL Server database. It supports methods like ExecuteReader(), ExecuteScalar(), and ExecuteNonQuery().
SqlCommand?
+
Executes SQL queries, commands, and stored procedures on a database.
SqlCommandBuilder?
+
SqlCommandBuilder auto-generates Insert, Update, and Delete commands for a DataAdapter based on a select query. It reduces manual SQL writing.
SqlTransaction in ADO.NET?
+
SqlTransaction allows executing multiple commands as a single transaction with commit or rollback.
SqlTransaction?
+
SqlTransaction ensures multiple operations execute as a single unit. If any operation fails, the entire transaction can be rolled back.
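A minimal sketch of a money transfer (connectionString and the Accounts table are placeholders):
using (var con = new SqlConnection(connectionString))
{
    con.Open();
    SqlTransaction tx = con.BeginTransaction();
    try
    {
        new SqlCommand("UPDATE Accounts SET Balance = Balance - 100 WHERE Id = 1", con, tx).ExecuteNonQuery();
        new SqlCommand("UPDATE Accounts SET Balance = Balance + 100 WHERE Id = 2", con, tx).ExecuteNonQuery();
        tx.Commit();   // both updates succeed together...
    }
    catch
    {
        tx.Rollback(); // ...or neither is applied
        throw;
    }
}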
Stop a running thread?
+
Threads can be stopped using Thread.Abort() (deprecated in modern .NET), a CancellationToken, or cooperative flag-based termination (recommended).
Strongly typed DataSet?
+
Strongly typed DataSet has a predefined schema and provides compile-time checking of tables and columns.
System.Data Namespace Class.
+
System.Data namespace provides classes for working with relational data. It includes DataTable, DataSet, DataRelation, DataColumn, and connection-related classes.
TableMapping in ADO.NET?
+
TableMapping maps source table names from a DataAdapter to destination DataSet table names.
Transaction in ADO.NET?
+
A transaction is a set of operations executed as a single unit, ensuring ACID properties.
Transactions and Concurrency in ADO.NET?
+
Transactions ensure multiple database operations execute as a unit (commit/rollback). Concurrency manages simultaneous access using locking or optimistic/pessimistic control.
Transactions in ADO.NET?
+
Ensures a set of operations execute as a unit; rollback occurs on failure.
Two Fundamental Objects in ADO.NET.
+
Connection Object and Command Object.
Two important ADO.NET objects?
+
DataReader for connected model and DataSet for disconnected model.
Typed vs. Untyped Dataset
+
Typed DataSet has predefined schema with IntelliSense support. Untyped DataSet does not have fixed schema and works with dynamic tables.
Use of connection object?
+
Creates a link to the database and opens/closes transactions and commands.
Use of DataSet Object.
+
A DataSet stores multiple tables in memory, supports XML formatting, relational mapping, and offline work. Changes can later be synchronized with the database via DataAdapter.
Use of DataView
+
DataView provides a filtered, sorted view of a DataTable without modifying actual data. It supports searching, sorting, and binding to UI controls.
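For example, given a populated DataTable named table with Name and Total columns (placeholders):
var view = new DataView(table)
{
    RowFilter = "Total > 100",  // filters without modifying the DataTable
    Sort = "Name ASC"           // sorted view of the same rows
};
// view can now be bound to a UI control such as DataGridView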
Use of SqlCommand object?
+
Executes SQL statements: SELECT, INSERT, UPDATE, DELETE, stored procedures.
Uses of Stored Procedure
+
Stored procedures enhance performance, security, reusability, and reduce traffic by executing on the server.
Which object needs to be closed?
+
Objects like Connection, DataReader, and XmlReader must be closed to release resources.
XML support in ADO.NET?
+
ADO.NET can read, write, and manipulate XML using DataSet, DataTable, and XML methods like ReadXml and WriteXml.
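For example, assuming ds is a populated DataSet:
ds.WriteXml("employees.xml");        // data only
ds.WriteXmlSchema("employees.xsd");  // schema only

var copy = new DataSet();
copy.ReadXmlSchema("employees.xsd");
copy.ReadXml("employees.xml");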

Microservices Architecture (.NET + Azure)

+

End-to-end enterprise-grade architecture with real production patterns

Complete, end-to-end view of an Advanced Microservices Architecture using .NET & Azure, covering design, development, deployment, and operations, along with the tools used at each stage.

  1. How does Docker Work?
    +

    Docker’s architecture is built around three main components that work together to build, distribute, and run containers.

    1 - Docker Client

    This is the interface through which users interact with Docker. It sends commands (such as build, pull, run, push) to the Docker Daemon using the Docker API.

    2 - Docker Host

    This is where the Docker Daemon runs. It manages images, containers, networks, and volumes, and is responsible for building and running applications.

    3 - Docker Registry

    The storage system for Docker images. Public registries like Docker Hub or private registries allow pulling and pushing images.

  2. How Does CQRS Work?
    +

    CQRS (Command Query Responsibility Segregation) separates write (Command) and read (Query) operations for better scalability and maintainability.

    1 - The client sends a command to update the system state. A Command Handler validates and executes logic using the Domain Model.

    2 - Changes are saved in the Write Database and can also be saved to an Event Store. Events are emitted to update the Read Model asynchronously.

    3 - The projections are stored in the Read Database. This database is eventually consistent with the Write Database.

    4 - On the query side, the client sends a query to retrieve data.

    5 - A Query Handler fetches data from the Read Database, which contains precomputed projections.

    6 - Results are returned to the client without hitting the write model or the write database.
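    A minimal C# sketch of this split (all names are hypothetical; persistence and messaging are elided):

    using System;

    public record ChangePrice(Guid ProductId, decimal NewPrice); // command (write side)
    public record PriceView(Guid ProductId, decimal Price);      // projection (read side)

    public class ChangePriceHandler
    {
        public void Handle(ChangePrice command)
        {
            // validate, load the aggregate, apply the change,
            // persist to the write database / event store,
            // then publish an event that updates the read model asynchronously
        }
    }

    public class GetPriceHandler
    {
        public PriceView Handle(Guid productId)
        {
            // served from the read database's precomputed projection,
            // eventually consistent with the write side
            return new PriceView(productId, 0m); // placeholder lookup
        }
    }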

  3. Containerization Explained: From Build to Runtime
    +

    “Build once, run anywhere.” That’s the promise of containerization, and here’s how it actually works:

    Build Flow: Everything starts with a Dockerfile, which defines how your app should be built. When you run docker build, it creates a Docker Image containing:

    - Your code

    - The required dependencies

    - Necessary libraries

    This image is portable. You can move it across environments, and it’ll behave the same way, whether on your local machine, a CI server, or in the cloud.

    Runtime Architecture: When you run the image, it becomes a Container, an isolated environment that executes the application. Multiple containers can run on the same host, each with its own filesystem, process space, and network stack.

    The Container Engine (like Docker, containerd, CRI-O, or Podman) manages:

    - The container lifecycle

    - Networking and isolation

    - Resource allocation

    All containers share the Host OS kernel, sitting on top of the hardware. That’s how containerization achieves both consistency and efficiency, light like processes, but isolated like VMs.

    Cloud Load Balancer Cheat Sheet

    Efficient load balancing is vital for optimizing the performance and availability of your applications in the cloud.

    However, managing load balancers can be overwhelming, given the various types and configuration options available.

    In today's multi-cloud landscape, mastering load balancing is essential to ensure seamless user experiences and maximize resource utilization, especially when orchestrating applications across multiple cloud providers. Having the right knowledge is key to overcoming these challenges and achieving consistent, reliable application delivery.

    In selecting the appropriate load balancer type, it's essential to consider factors such as application traffic patterns, scalability requirements, and security considerations. By carefully evaluating your specific use case, you can make informed decisions that enhance your cloud infrastructure's efficiency and reliability.

    This Cloud Load Balancer cheat sheet will help you simplify the decision-making process and implement the most effective load-balancing strategy for your cloud-based applications.

  4. System Performance Metrics Every Engineer Should Know
    +

    Your API is slow. But how slow, exactly? You need numbers. Real metrics that tell you what's actually broken and where to fix it.

    Here are the four core metrics every engineer should know when analyzing system performance:

    - Queries Per Second (QPS): How many incoming requests your system handles per second. Your server gets 1,000 requests in one second? That's 1,000 QPS. Sounds straightforward until you realize most systems can't sustain their peak QPS for long without things starting to break.

    - Transactions Per Second (TPS): How many completed transactions your system processes per second. A transaction includes the full round trip, i.e., the request goes out, hits the database, and comes back with a response.

    TPS tells you about actual work completed, not just requests received. This is what your business cares about.

    - Concurrency: How many simultaneous active requests your system is handling at any given moment. You could have 100 requests per second, but if each takes 5 seconds to complete, you're actually handling 500 concurrent requests at once.

    High concurrency means you need more resources, better connection pooling, and smarter thread management.

    - Response Time (RT): The elapsed time from when a request starts until the response is received. Measured at both the client level and server level.

    A simple relationship ties them all together: QPS = Concurrency ÷ Average Response Time

    More concurrency or lower response time = higher throughput.
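    As a worked example: a system holding 500 concurrent requests with an average response time of 0.5 s sustains 500 ÷ 0.5 = 1,000 QPS; cut the average response time to 0.25 s and the same concurrency yields 2,000 QPS.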

  5. Database Types You Should Know
    +

    There’s no such thing as a one-size-fits-all database anymore. Modern applications rely on multiple database types, from real-time analytics to vector search for AI. Knowing which type to use can make or break your system’s performance.

    Relational: Traditional row-and-column databases, great for structured data and transactions.

    Columnar: Optimized for analytics, storing data by columns for fast aggregations.

    Key-Value: Stores data as simple key–value pairs, enabling fast lookups.

    In-memory: Stores data in RAM for ultra-low latency lookups, ideal for caching or session management.

    Wide-Column: Handles massive amounts of semi-structured data across distributed nodes.

    Time-series: Specialized for metrics, logs, and sensor data with time as a primary dimension.

    Immutable Ledger: Ensures tamper-proof, cryptographically verifiable transaction logs.

    Graph: Models complex relationships, perfect for social networks and fraud detection.

    Document: Flexible JSON-like storage, great for modern apps with evolving schemas.

    Geospatial: Manages location-aware data such as maps, routes, and spatial queries.

    Text-search: Full-text indexing and search with ranking, filters, and analytics.

    Blob: Stores unstructured objects like images, videos, and files.

    Vector: Powers AI/ML apps by enabling similarity search across embeddings.

  6. Top 20 System Design Concepts
    +

    1. Load Balancing: Distributes traffic across multiple servers for reliability and availability.

    2. Caching: Stores frequently accessed data in memory for faster access.

    3. Database Sharding: Splits databases to handle large-scale data growth.

    4. Replication: Copies data across replicas for availability and fault tolerance.

    5. CAP Theorem: Trade-off between consistency, availability, and partition tolerance.

    6. Consistent Hashing: Distributes load evenly in dynamic server environments.

    7. Message Queues: Decouples services using asynchronous event-driven architecture.

    8. Rate Limiting: Controls request frequency to prevent system overload.

    9. API Gateway: Centralized entry point for routing API requests.

    10. Microservices: Breaks systems into independent, loosely coupled services.

    11. Service Discovery: Locates services dynamically in distributed systems.

    12. CDN: Delivers content from edge servers for speed.

    13. Database Indexing: Speeds up queries by indexing important fields.

    14. Data Partitioning: Divides data across nodes for scalability and performance.

    15. Eventual Consistency: Guarantees consistency over time in distributed databases.

    16. WebSockets: Enables bi-directional communication for live updates.

    17. Scalability: Increases capacity by upgrading or adding machines.

    18. Fault Tolerance: Ensures system availability during hardware/software failures.

    19. Monitoring: Tracks metrics and logs to understand system health.

    20. Authentication & Authorization: Controls user access and verifies identity securely.
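    To make one of these concrete, here is a minimal consistent-hashing sketch (concept 6) in C#. It is illustrative only: real implementations add virtual nodes per server for smoother key distribution.

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Security.Cryptography;
    using System.Text;

    class ConsistentHashRing
    {
        // Ring position -> server name, kept sorted
        private readonly SortedDictionary<uint, string> _ring = new();

        private static uint Hash(string s) =>
            BitConverter.ToUInt32(MD5.HashData(Encoding.UTF8.GetBytes(s)), 0);

        public void AddServer(string server) => _ring[Hash(server)] = server;

        // A key belongs to the first server clockwise from its hash
        public string GetServer(string key)
        {
            uint h = Hash(key);
            foreach (var (position, server) in _ring)
                if (position >= h) return server;
            return _ring.First().Value; // wrap around the ring
        }
    }

    Adding or removing a server now remaps only the keys between that server and its neighbor on the ring, instead of rehashing everything.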

  7. 5 REST API Authentication Methods
    +

    1. Basic Authentication: Clients include a Base64-encoded username and password in every request header, which is simple but insecure since credentials are transmitted in plaintext. Useful in quick prototypes or internal services over secure networks.

    2. Session Authentication: After login, the server creates a session record and issues a cookie. Subsequent requests send that cookie so the server can validate user state. Used in traditional web-apps.

    3. Token Authentication: Clients authenticate once to receive a signed token, then present the token on each request for stateless authentication. Used in single-page applications and modern APIs that require scalable, stateless authentication.

    4. OAuth-Based Authentication: Clients obtain an access token via an authorization grant from an OAuth provider, then use that token to call resource servers on the user’s behalf. Used in cases of third-party integrations or apps that need delegated access to user data.

    5. API Key Authentication: Clients present a predefined key (often in headers or query strings) with each request. The server verifies the key to authorize access. Used in service-to-service or machine-to-machine APIs where simple credential checks are sufficient.
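    For a concrete view of how three of these schemes differ on the wire, here is a minimal C# sketch; the token and key values are placeholders:

    using System;
    using System.Net.Http;
    using System.Net.Http.Headers;
    using System.Text;

    var client = new HttpClient();

    // Basic: Base64("username:password"); acceptable only over HTTPS
    var credentials = Convert.ToBase64String(Encoding.UTF8.GetBytes("user:password"));
    client.DefaultRequestHeaders.Authorization =
        new AuthenticationHeaderValue("Basic", credentials);

    // Token (e.g., JWT): a signed token presented as a Bearer credential
    client.DefaultRequestHeaders.Authorization =
        new AuthenticationHeaderValue("Bearer", "<signed-token>");

    // API key: commonly a custom header such as X-Api-Key
    client.DefaultRequestHeaders.Add("X-Api-Key", "<api-key>");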

  8. Virtualization vs. Containerization
    +

    Before containers simplified deployment, virtualization changed how we used hardware. Both isolate workloads, but they do it differently.

    - Virtualization (Hardware-level isolation): Each virtual machine runs a complete operating system (Windows, Fedora, or Ubuntu) with its own kernel, drivers, and libraries. The hypervisor (VMware ESXi, Hyper-V, KVM) sits directly on hardware and emulates physical machines for each guest OS.

    This makes VMs heavy but isolated. Need Windows and Linux on the same box? VMs handle it easily. Startup time for a typical VM is in minutes because you're booting an entire operating system from scratch.

    - Containerization (OS-level isolation): Containers share the host operating system's kernel. No separate OS per container. Just isolated processes with their own filesystem and dependencies.

    The container engine (Docker, containerd, CRI-O, Podman) manages lifecycle, networking, and isolation, but it all runs on top of a single shared kernel. Lightweight and fast. Containers start in milliseconds because you're not booting an OS, just launching a process.

    But here's the catch: all containers on a host must be compatible with that host's kernel. Can't run Windows containers on a Linux host (without nested virtualization tricks).

  9. Types of Virtualization
    +

    Virtualization didn’t just make servers efficient, it changed how we build, scale, and deploy everything. Here’s a quick breakdown of the four major types of virtualization you’ll find in modern systems:

    1. Traditional (Bare Metal): Applications run directly on the operating system. No virtualization layer, no isolation between processes. All applications share the same OS kernel, libraries, and resources.

    2. Virtualized (VM-based): Each VM runs its own complete operating system. The hypervisor sits on physical hardware and emulates entire machines for each guest OS. Each VM thinks it has dedicated hardware even though it's sharing the same physical server.

    3. Containerized: Containers share the host operating system's kernel but get isolated runtime environments. Each container has its own filesystem, but they're all using the same underlying OS. The container engine (Docker, containerd, Podman) manages lifecycle, networking, and isolation without needing separate operating systems for each application.

    Lightweight and fast. Containers start in milliseconds because you're not booting an OS. Resource usage is dramatically lower than VMs.

    4. Containers on VMs: This is what actually runs in production cloud environments. Containers inside VMs, getting benefits from both. Each VM runs its own guest OS with a container engine inside. The hypervisor provides hardware-level isolation between VMs. The container engine provides lightweight application isolation within VMs.

    This is the architecture behind Kubernetes clusters on AWS, Azure, and GCP. Your pods are containers, but they're running inside VMs you never directly see or manage.

  10. Git Merge vs. Rebase vs. Squash Commit!
    +

    What are the differences?

    When we merge changes from one Git branch to another, we can use ‘git merge’ or ‘git rebase’. Here is how the two commands differ.

    Git Merge

    This creates a new commit G’ in the main branch. G’ ties the histories of both main and feature branches.

    Git merge is non-destructive. Neither the main nor the feature branch is changed.

    Git Rebase

    Git rebase moves the feature branch histories to the head of the main branch. It creates new commits E’, F’, and G’ for each commit in the feature branch.

    The benefit of rebase is that it has a linear commit history.

    Rebase can be dangerous if “the golden rule of git rebase” is not followed.

    The Golden Rule of Git Rebase

    Never use it on public branches!
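    Git Squash

    A squash merge (git merge --squash) collapses all commits from the feature branch into a single new commit on the main branch. You trade fine-grained feature history for a clean, one-commit-per-change history on main.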

  11. Popular Backend Tech Stack.
    +
  12. The AI Agent Tech Stack
    +
    1. Foundation Models: Large-scale pre-trained language models that serve as the “brains” of AI agents, enabling capabilities like reasoning, text generation, coding, and question answering.

      2. Data Storage: This layer handles vector databases and memory storage systems used by AI agents to store and retrieve context, embeddings, or documents.

      3. Agent Development Frameworks: These frameworks help developers build, orchestrate, and manage multi-step AI agents and their workflows.

      4. Observability: This category enables monitoring, debugging, and logging of AI agent behavior and performance in real-time.

      5. Tool Execution: These platforms allow AI agents to interface with real-world tools (for example, APIs, browsers, external systems) to complete complex tasks.

      6. Memory Management: These systems manage long-term and short-term memory for agents, helping them retain useful context and learn from past interactions.

  13. How to Design Good APIs
    +

    A well-designed API feels invisible, it just works. Behind that simplicity lies a set of consistent design principles that make APIs predictable, secure, and scalable.

    Here's what separates good APIs from terrible ones:

    - Idempotency: GET, HEAD, PUT, and DELETE should be idempotent. Send the same request twice, get the same result. No unintended side effects. POST and PATCH are not idempotent. Each call creates a new resource or modifies the state differently.

    Use idempotency keys stored in Redis or your database. The client sends the same key with retries; the server recognizes it and returns the original response instead of processing again (see the sketch at the end of this section).

    - Versioning: Version your APIs (e.g., “/api/v1/products”) so you can evolve endpoints without breaking existing clients.

    - Noun-based resource names: Resources should be nouns, not verbs. “/api/products”, not “/api/getProducts”.

    - Security: Secure every endpoint with proper authentication. Bearer tokens (like JWTs) include a header, payload, and signature to validate requests. Always use HTTPS and verify tokens on every call.

    - Pagination: When returning large datasets, use pagination parameters like “?limit=10&offset=20” to keep responses efficient and consistent.
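    Here is the idempotency-key sketch referenced above: a minimal version using an in-memory dictionary as a stand-in for Redis. The /payments endpoint and ProcessPayment helper are hypothetical.

    using System.Collections.Concurrent;

    var app = WebApplication.Create(args);

    // Idempotency-Key -> stored response (in production: Redis with a TTL)
    var responses = new ConcurrentDictionary<string, string>();

    app.MapPost("/payments", (HttpRequest request) =>
    {
        var key = request.Headers["Idempotency-Key"].ToString();
        if (string.IsNullOrEmpty(key))
            return Results.BadRequest("Idempotency-Key header is required");

        // First call with this key does the work; retries with the same key
        // get the original result back instead of paying twice
        var result = responses.GetOrAdd(key, _ => ProcessPayment());
        return Results.Ok(result);
    });

    app.Run();

    static string ProcessPayment() => $"payment-{Guid.NewGuid()}";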

  14. Big Data Pipeline Cheatsheet for AWS, Azure, and Google Cloud
    +

    Each platform offers a comprehensive suite of services that cover the entire lifecycle:

    1 - Ingestion: Collecting data from various sources

    2 - Data Lake: Storing raw data

    3 - Computation: Processing and analyzing data

    4 - Data Warehouse: Storing structured data

    5 - Presentation: Visualizing and reporting insights

    AWS uses services like Kinesis for data streaming, S3 for storage, EMR for processing, Redshift for warehousing, and QuickSight for visualization.

    Azure’s pipeline includes Event Hubs for ingestion, Data Lake Store for storage, Databricks for processing, Cosmos DB for warehousing, and Power BI for presentation.

    GCP offers Pub/Sub for data streaming, Cloud Storage for data lakes, Dataproc and Dataflow for processing, BigQuery for warehousing, and Looker Studio (formerly Data Studio) for visualization.

  15. Top 5 common ways to improve API performance.
    +

    Result Pagination:

    This method is used to optimize large result sets by returning them to the client page by page, enhancing service responsiveness and user experience.

    Asynchronous Logging:

    This approach involves sending logs to a lock-free buffer and returning immediately, rather than dealing with the disk on every call. Logs are periodically flushed to the disk, significantly reducing I/O overhead.

    Data Caching:

    Frequently accessed data can be stored in a cache to speed up retrieval. Clients check the cache before querying the database, with data storage solutions like Redis offering faster access due to in-memory storage.
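    A minimal cache-aside sketch of this pattern in C#, using in-process MemoryCache as a stand-in for Redis; Product and GetProductFromDb are hypothetical:

    using System;
    using Microsoft.Extensions.Caching.Memory;

    var cache = new MemoryCache(new MemoryCacheOptions());

    Product GetProduct(int id) =>
        cache.GetOrCreate($"product:{id}", entry =>
        {
            // Expire after 5 minutes so stale entries age out
            entry.AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(5);
            return GetProductFromDb(id); // runs only on a cache miss
        })!;

    // Stand-in for the real database query
    Product GetProductFromDb(int id) => new(id, "sample-product");

    record Product(int Id, string Name);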

    Payload Compression:

    To reduce data transmission time, requests and responses can be compressed (e.g., using gzip), making the upload and download processes quicker.

    Connection Pooling:

    This technique involves using a pool of open connections to manage database interaction, which reduces the overhead associated with opening and closing connections each time data needs to be loaded. The pool manages the lifecycle of connections for efficient resource use.

  16. Explaining 9 types of API testing.
    +

    🔹 Smoke Testing

    This is done after API development is complete. Simply validate if the APIs are working and nothing breaks.

    🔹 Functional Testing

    This creates a test plan based on the functional requirements and compares the results with the expected results.

    🔹 Integration Testing

    This test combines several API calls to perform end-to-end tests. The intra-service communications and data transmissions are tested.

    🔹 Regression Testing

    This test ensures that bug fixes or new features shouldn’t break the existing behaviors of APIs.

    🔹 Load Testing

    This tests applications’ performance by simulating different loads. Then we can calculate the capacity of the application.

    🔹 Stress Testing

    We deliberately create high loads to the APIs and test if the APIs are able to function normally.

    🔹 Security Testing

    This tests the APIs against all possible external threats.

    🔹 UI Testing

    This tests the UI interactions with the APIs to make sure the data can be displayed properly.

    🔹 Fuzz Testing

    This injects invalid or unexpected input data into the API and tries to crash the API. In this way, it identifies the API vulnerabilities.

  17. 10 Key Data Structures We Use Every Day
    +

    - list: keep your Twitter feeds

    - stack: support undo/redo in a word processor

    - queue: keep printer jobs, or send user actions in-game

    - hash table: caching systems

    - array: math operations

    - heap: task scheduling

    - tree: keep the HTML document, or for AI decisions

    - suffix tree: for searching strings in a document

    - graph: for tracking friendships, or path finding

    - r-tree: for finding the nearest neighbor

    - vertex buffer: for sending data to the GPU for rendering

  18. How to learn payment systems?
    +
  19. How to Debug a Slow API?
    +

    Your API is slow. Users are complaining. And you have no idea where to start looking. Here is the systematic approach to track down what is killing your API.

    Start with the network: High latency? Throw a CDN in front of your static assets. Large payloads? Compress your responses. These are quick wins that don't require touching code.

    Check your backend code next: This is where most slowdowns hide. CPU-heavy operations should run in the background. Complicated business logic that needs simplification. Blocking synchronous calls that should be async. Profile it, find the hot paths, fix them.

    Check the database: Missing indexes are the classic culprit. Also watch for N+1 queries, where you are hammering the database hundreds of times when one batch query would do.
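    The N+1 problem is easiest to see in code. A hedged EF Core sketch, assuming a DbContext db with an Orders set, a Customer navigation property, and lazy loading enabled:

    using Microsoft.EntityFrameworkCore;

    // N+1: one query for the orders, then one extra query per order
    var orders = db.Orders.ToList();
    foreach (var order in orders)
        Console.WriteLine(order.Customer.Name); // each access lazy-loads a customer

    // Fix: a single query that joins the customers in up front
    var ordersWithCustomers = db.Orders
        .Include(o => o.Customer)
        .ToList();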

    Don't forget external APIs: That Stripe call, that Google Maps request, they are outside your control. Make parallel calls where you can. Set aggressive timeouts and retries so one slow third-party doesn't tank your whole response.

    Finally, check your infrastructure: Maxed-out servers need auto-scaling. Connection pool limits need tuning. Sometimes the problem isn't your code at all, it’s that you are trying to serve 10,000 requests with resources built for 100.

    The key is being methodical. Don't just throw solutions at the wall. Measure first, identify the actual bottleneck, then fix it.

  20. 1️⃣High-Level Microservices Architecture (Azure + .NET)
    +

    Core Principles

    • Loosely coupled services
    • Independent deployments
    • Database per service
    • Event-driven communication
    • Automated CI/CD
    • Observability & resilience built-in
  21. 2️⃣Architecture Layers & Responsibilities
    +

    🔹Client Layer

    • Web (Angular/React)
    • Mobile Apps
    • External Consumers

    🔹API Gateway Layer

    • Single entry point
    • Security, throttling, routing
    • Versioning & transformation

    🔹Microservices Layer

    • Independent .NET services
    • Own database & lifecycle
    • REST + Async Messaging

    🔹Data Layer

    • Polyglot persistence
    • No shared databases

    🔹Infrastructure Layer

    • Containers, networking, security
    • Auto-scaling & high availability
  22. 3️⃣Technology Stack (What is Used & Why)
    +

    🧩Backend (Microservices)

    • Framework: ASP.NET Core (.NET 8)
    • API Style: REST + Minimal APIs
    • Auth: OAuth 2.0 / OpenID Connect
    • Validation: FluentValidation
    • ORM: Entity Framework Core
    • Async Messaging: Azure Service Bus
    • Event Streaming: Azure Event Grid

    🌐API Gateway

    • Azure API Management: routing, auth, throttling
    • YARP (optional): internal reverse proxy

  23. 📦Containerization & Orchestration
    +

    • Docker: package microservices
    • Azure Kubernetes Service (AKS): orchestration
    • Helm: Kubernetes deployments
    • NGINX Ingress: traffic routing

    🗄️Databases (Per Microservice)

    • Relational: Azure SQL / PostgreSQL
    • NoSQL: Cosmos DB
    • Cache: Azure Redis Cache
    • Search: Azure Cognitive Search

  24. 4️⃣Communication Patterns
    +

    🔁Synchronous

    • REST (HTTP)
    • gRPC (internal, high-performance)

    🔔Asynchronous (Recommended)

    • Azure Service Bus (queues/topics)
    • Event Grid for domain events
    • Enables loose coupling & scalability
  25. 5️⃣Security Architecture (Enterprise-Grade)
    +

    Security Layers

    • Azure AD / Entra ID – identity provider
    • OAuth 2.0 + OpenID Connect
    • JWT validation at API Gateway
    • Azure Key Vault – secrets & certificates
    • Managed Identity – no secrets in code
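    A minimal sketch of wiring JWT validation in ASP.NET Core; the tenant and audience values are placeholders:

    using Microsoft.AspNetCore.Authentication.JwtBearer;

    var builder = WebApplication.CreateBuilder(args);

    builder.Services
        .AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
        .AddJwtBearer(options =>
        {
            // Entra ID (Azure AD) issuer for the tenant
            options.Authority = "https://login.microsoftonline.com/<tenant-id>/v2.0";
            options.Audience = "<api-client-id>";
        });
    builder.Services.AddAuthorization();

    var app = builder.Build();

    app.UseAuthentication();
    app.UseAuthorization();

    // Requests without a valid JWT are rejected before the handler runs
    app.MapGet("/orders", () => "secured").RequireAuthorization();

    app.Run();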
  26. 6️⃣CI/CD Pipeline (End-to-End Automation)
    +

    Pipeline Flow

    1. Code Commit (Git)
    2. Build & Unit Tests
    3. Docker Image Build
    4. Push to Azure Container Registry
    5. Deploy to AKS using Helm
    6. Smoke & Integration Tests

    Tools

    • Azure DevOps / GitHub Actions
    • Docker
    • Helm
    • SonarQube (code quality)
  27. 7️⃣Observability & Reliability
    +
    Monitoring Stack

    Tool

    Purpose

    Azure Monitor

    Infra metrics

    Application Insights

    Logs & traces

    OpenTelemetry

    Distributed tracing

    Log Analytics

    Centralized logs

    Resilience Patterns

    • Circuit Breaker (Polly)
    • Retry with backoff
    • Timeouts
    • Bulkheads
    • Health Checks
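    A minimal sketch of the first three patterns using Polly's v7-style API; the downstream URL is assumed:

    using System;
    using System.Net.Http;
    using System.Threading;
    using Polly;

    var httpClient = new HttpClient();

    // Retry with exponential backoff: waits 2s, 4s, 8s between attempts
    var retry = Policy
        .Handle<HttpRequestException>()
        .WaitAndRetryAsync(3, attempt => TimeSpan.FromSeconds(Math.Pow(2, attempt)));

    // Circuit breaker: after 5 consecutive failures, fail fast for 30s
    var breaker = Policy
        .Handle<HttpRequestException>()
        .CircuitBreakerAsync(5, TimeSpan.FromSeconds(30));

    // Per-call timeout (optimistic: the delegate must honor the token)
    var timeout = Policy.TimeoutAsync(TimeSpan.FromSeconds(2));

    // Outermost first: retry wraps the breaker, which wraps the timeout
    var policy = Policy.WrapAsync(retry, breaker, timeout);

    await policy.ExecuteAsync(
        ct => httpClient.GetAsync("https://inventory/api/stock", ct),
        CancellationToken.None);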
  28. 8️⃣Infrastructure as Code (IaC)
    +

    What Is Automated

    • AKS
    • API Management
    • Networking (VNet, Subnets)
    • Azure SQL / Cosmos DB
    • Key Vault
    • Monitoring

    Benefits

    • Reproducible environments
    • Easy rollbacks
    • Dev / QA / Prod consistency
  29. 9️⃣Complete End-to-End Flow (Simplified)
    +
    1. Client → API Gateway
    2. API Gateway → Auth (Azure AD)
    3. Gateway routes to Microservice
    4. Service processes request
    5. Publishes event to Service Bus
    6. Other services react asynchronously
    7. Logs & metrics collected centrally
    8. CI/CD deploys changes independently


  30. 1️⃣Real-World Reference Architecture (Enterprise Scale)
    +

    🔹Architecture Overview

    This is the most commonly used production architecture in large organizations.

    🔹Components & Flow

    1. Clients
    • Web (Angular/React)
    • Mobile Apps
    • External APIs
    2. API Gateway (Azure API Management)
    • Authentication & JWT validation
    • Rate limiting & throttling
    • Request routing
    • API versioning
    3. Microservices (.NET)
    Each service has:
    • Own codebase
    • Own database
    • Own CI/CD pipeline
    • Stateless & horizontally scalable
    4. Communication
    • REST/gRPC synchronous
    • Service Bus async events
    5. Data Layer
    • SQL / PostgreSQL per service
    • Cosmos DB for NoSQL
    • Redis for caching
    6. Observability
    • Logs, metrics, traces collected centrally

    ✅Used by banks, fintech, e-commerce, SaaS platforms

  31. 2️⃣Sample .NET Microservice Code (Clean & Production-Ready)
    +

    🔹Folder Structure

    OrderService

    ├── Controllers

    ├── Application

    ├── Domain

    ├── Infrastructure

    ├── Program.cs

    └── appsettings.json

    🔹Minimal API Example (Order Service)

    var builder = WebApplication.CreateBuilder(args);

    // Register the EF Core context (connection string comes from configuration)
    builder.Services.AddDbContext<OrderDbContext>();
    builder.Services.AddEndpointsApiExplorer();
    builder.Services.AddHealthChecks();

    var app = builder.Build();

    app.MapPost("/orders", async (Order order, OrderDbContext db) =>
    {
        db.Orders.Add(order);
        await db.SaveChangesAsync();
        return Results.Created($"/orders/{order.Id}", order);
    });

    app.MapHealthChecks("/health");

    app.Run();

    🔹Async Event Publishing (Azure Service Bus)

    await sender.SendMessageAsync(
        new ServiceBusMessage(JsonSerializer.Serialize(orderCreatedEvent)));

    ✔Stateless
    ✔Fast startup
    ✔Cloud-native
    ✔Easy to scale

  32. 3️⃣Terraform + AKS Example (Real Infrastructure as Code)
    +

    🔹What Terraform Creates

    • AKS Cluster
    • Azure Container Registry
    • VNet & Subnets
    • Log Analytics
    • Managed Identity

    🔹Terraform Code (AKS – Simplified)

    resource "azurerm_kubernetes_cluster" "aks" {
      name                = "prod-aks"
      location            = azurerm_resource_group.rg.location
      resource_group_name = azurerm_resource_group.rg.name
      dns_prefix          = "prodaks"

      default_node_pool {
        name       = "system"
        node_count = 3
        vm_size    = "Standard_DS2_v2"
      }

      identity {
        type = "SystemAssigned"
      }
    }

    🔹Deployment Flow

    Terraform → AKS

    CI/CD → Docker Image

    Helm → Deploy Microservice

    ✔Environment consistency
    ✔Easy rollback
    ✔No manual infra changes

    4️⃣Production Readiness Checklist (Very Important)

    ✅Architecture

    • Database per service
    • Async messaging
    • No shared libraries for business logic

    ✅Security

    • OAuth 2.0 / OpenID Connect
    • Secrets in Key Vault
    • HTTPS everywhere
    • Zero trust networking

    ✅Reliability

    • Health checks
    • Circuit breakers
    • Retry + timeout policies
    • Graceful shutdown

    ✅Observability

    • Centralized logging
    • Distributed tracing
    • Alerts configured
    • Dashboards ready

    ✅DevOps

    • CI/CD per service
    • Blue-Green / Canary deployments
    • Rollback strategy
  33. 5️⃣Microservices Anti-Patterns (❌Avoid These)
    +

    ❌Distributed Monolith

    • Tight coupling
    • Synchronous chains
    • Shared database

    🛑Worst mistake

    ❌Chatty Communication

    • Too many REST calls
    • High latency
    • Cascade failures

    ✔Prefer async events

    ❌Shared Database

    • Schema changes break services
    • No independence

    ✔Database per service

    ❌Over-Engineering Early

    • Too many services
    • Too much infra
    • Low business value

    ✔Start modular → evolve

    ❌Ignoring Observability

    • No logs
    • No tracing
    • No metrics

    ✔You can’t fix what you can’t see

    🧠Final Recommendation

    Start with:

    • Modular monolith
    • Clear service boundaries
    • Strong CI/CD & monitoring

    Then evolve to:

    • Event-driven microservices
    • AKS + Terraform
    • Independent deployments


  34. 📦Complete Sample Project (GitHub-Style)
    +
  35. 🔹Project Structure (Monorepo – Common in Enterprises)
    +

    microservices-platform/

    ├── services/

    │ ├── order-service/

    │ │ ├── src/

    │ │ ├── Dockerfile

    │ │ └── helm/

    │ │

    │ ├── payment-service/

    │ └── inventory-service/

    ├── shared/

    │ ├── contracts/ # Event DTOs only

    ├── infrastructure/

    │ ├── terraform/

    │ │ ├── aks.tf

    │ │ ├── apim.tf

    │ │ └── servicebus.tf

    ├── pipelines/

    │ ├── order-service.yml

    │ └── payment-service.yml

    └── README.md

    🔹Key Design Rules

    ✔Each microservice:

    • Own database
    • Own Dockerfile
    • Own Helm chart
    • Own CI/CD pipeline

    ✔Shared folder:

    • Only contracts/events
    • ❌No shared business logic

    🔹Typical Request Flow

    Client → API Gateway → Order Service

    Order Service → Publish Event → Service Bus

    Service Bus → Inventory Service

  36. 🧪Testing Strategy for Microservices (Complete Pyramid)
    +

    🔺Testing Pyramid (Recommended)

    1️⃣Unit Tests (Most Important)

    • Business logic only
    • No DB, no network
    • Very fast

    ✔Tools:

    • xUnit / NUnit
    • Moq / NSubstitute

    2️⃣Integration Tests

    • API + DB
    • Real infrastructure (TestContainers)

    ✔Examples:

    • Order saved in DB
    • Message sent to Service Bus

    3️⃣Contract Tests (Very Important)

    • Consumer-driven contracts
    • Prevent breaking changes

    ✔Tools:

    • Pact
    • OpenAPI validation

    4️⃣End-to-End Tests (Few)

    • Full system flow
    • Slow but valuable

    ✔Tools:

    • Playwright
    • Postman / Newman

    🔹CI/CD Testing Flow

    Commit → Unit Tests → Integration Tests → Contract Tests → Deploy

  37. 🚀Zero-Downtime Deployment (AKS + Kubernetes)
    +

    🔹Rolling Deployment (Most Common)

    How It Works

    Old pods: v1 only → v1 + v2 → v2 only

    ✔Kubernetes ensures:

    • Traffic always available
    • No downtime
    • Automatic rollback on failure

    🔹Kubernetes Configuration (Concept)

    • readinessProbe → route traffic only to ready pods
    • livenessProbe → restart failed pods
    • maxUnavailable = 0
  38. 🔹Blue-Green Deployment (Critical Systems)
    +

    Blue (v1) → Live

    Green (v2) → Test → Switch traffic

    ✔Zero risk
    ✔Instant rollback
    ✔Used in banking & payments

    🔹Canary Deployment (Advanced)

    • Release to 5% users
    • Monitor metrics
    • Gradually increase traffic

    ✔Requires:

    • Metrics
    • Service Mesh or API Gateway
  39. 🧱Service Mesh Explained (Istio / Linkerd)
    +

    🔹What Problem Service Mesh Solves

    Without mesh:

    • Retry logic in every service
    • Security code everywhere
    • Hard to control traffic

    With mesh:
    ✔Infrastructure handles it

    🔹How Service Mesh Works

    Service A → Sidecar → Sidecar → Service B

    Each pod gets a sidecar proxy.

    🔹Capabilities Provided

    • mTLS: zero-trust security
    • Retries & timeouts: no code changes
    • Traffic splitting: canary releases
    • Circuit breakers: resilience
    • Observability: automatic metrics

    🔹Istio vs Linkerd

    • Complexity: Istio high, Linkerd low
    • Features: Istio very rich, Linkerd focused
    • Performance: Istio slightly heavier, Linkerd very fast
    • Learning curve: Istio steep, Linkerd easy

    ✔Istio → large enterprises
    ✔Linkerd → simpler, faster adoption

    🧠When to Use Service Mesh

    ✅Many services (20+)
    ✅Canary deployments
    ✅Strict security (mTLS)
    ✅Advanced traffic control

    ❌Avoid for small systems (overkill)

    ✅Final Enterprise Flow (Everything Together)

    GitHub → CI/CD → Tests → Docker → AKS → Service Mesh → Monitoring → Zero Downtime Releases


  40. 📂Full GitHub Repo with Sample Code (Enterprise-Style)
    +

    🔹Repository Type

    Monorepo (very common in enterprises)

    🔹Why Monorepo?

    ✔Easier governance
    ✔Shared standards
    ✔Centralized CI/CD
    ✔Easier refactoring

    🔹Folder Structure

    microservices-platform/

    ├── services/

    │ ├── order-service/

    │ │ ├── src/

    │ │ ├── tests/

    │ │ ├── Dockerfile

    │ │ └── helm/

    │ ├── payment-service/

    │ └── inventory-service/

    ├── shared/

    │ └── contracts/ # Events only (DTOs)

    ├── infrastructure/

    │ ├── terraform/

    │ └── kubernetes/

    ├── pipelines/

    │ └── azure-devops/

    └── README.md

    🔹Key Rules

    • ❌No shared business logic
    • ✔Shared event contracts only
    • ✔Each service deployable independently
  41. 🧪TestContainers + .NET Demo (Real Integration Testing)
    +

    🔹What Is TestContainers?

    TestContainers spins up real infrastructure during tests:

    • SQL Server
    • PostgreSQL
    • Redis
    • RabbitMQ / Kafka

    ✔No mocks
    ✔Production-like tests

    🔹How It Works

    Test → Start Container → Run API Tests → Destroy Container

    🔹Example Use Case

    Order Service Integration Test

    • Starts SQL container
    • Runs migrations
    • Calls API
    • Verifies DB state

    🔹Benefits

    ✔Catches real bugs
    ✔CI-friendly
    ✔No shared test DB

  42. 🚦Canary Deployment with Istio (Safe Releases)
    +

    🔹What Is Canary Deployment?

    Release new version to small % of users first.

    90% → v1
    10% → v2

    🔹How Istio Enables Canary

    Istio uses traffic rules, not code changes.

    🔹Traffic Flow

    Client → Istio Gateway → VirtualService

    → v1 Pods (90%)

    → v2 Pods (10%)

    🔹Canary Benefits

    ✔Zero downtime
    ✔Real user validation
    ✔Instant rollback
    ✔Metrics-driven decisions

    🔹When to Use

    • Financial systems
    • Payment services
    • High-traffic platforms
  43. 🔐End-to-End Security Walkthrough (Zero Trust)
    +

    🔹Security Layers (Outside → Inside)

    1️⃣Client Security

    • OAuth 2.0
    • OpenID Connect
    • Access tokens (JWT)

    2️⃣API Gateway

    • Token validation
    • Rate limiting
    • IP filtering

    3️⃣Service-to-Service Security

    • mTLS (via Istio)
    • No plaintext traffic
    • Identity-based access

    4️⃣Secrets Management

    • Managed Identity
    • Key Vault
    • No secrets in config files

    🔹End-to-End Request Flow

    Client → OAuth Token → API Gateway → Service Mesh (mTLS) → Microservice → Database

    ✔Zero trust
    ✔Encrypted everywhere
    ✔Auditable

  44. 📊Production Monitoring Dashboards (What Ops Actually See)
    +

    🔹Monitoring Pillars

    📈Metrics

    • CPU / Memory
    • Request rate
    • Error rate
    • Latency (RED metrics)

    📜Logs

    • Centralized logging
    • Correlation IDs
    • Structured logs (JSON)

    🧵Traces

    • Distributed tracing
    • End-to-end request flow
    • Bottleneck identification

    🔹Typical Dashboards

    ✔API response time
    ✔Error % per service
    ✔Pod restarts
    ✔Dependency failures
    ✔SLA / SLO tracking

    🔹Alerting Examples

    • Error rate > 2%
    • Latency > 500ms
    • Pod crash loop
    • Queue backlog growing

    🧠Final Enterprise Picture (All Together)

    GitHub → CI/CD → Tests (Unit + TestContainers) → Docker → AKS → Istio Canary → Secure mTLS → Monitoring Dashboards → Zero Downtime Production

    ✅What You’ve Covered Now

    ✔Real GitHub project structure
    ✔Real integration testing
    ✔Safe production deployments
    ✔Enterprise-grade security
    ✔Production observability


  45. 🧱Complete Istio YAML (Canary Rules)
    +

    🎯Goal

    Release v2 of a service to a small percentage of traffic without downtime.

    🔹Architecture Concept

    Client → Istio Ingress Gateway → VirtualService (traffic split) → DestinationRule (v1 / v2)

    🔹DestinationRule (Define Versions)

    apiVersion: networking.istio.io/v1beta1
    kind: DestinationRule
    metadata:
      name: order-service
    spec:
      host: order-service
      subsets:
        - name: v1
          labels:
            version: v1
        - name: v2
          labels:
            version: v2

    🔹VirtualService (Traffic Split)

    apiVersion: networking.istio.io/v1beta1
    kind: VirtualService
    metadata:
      name: order-service
    spec:
      hosts:
        - order-service
      http:
        - route:
            - destination:
                host: order-service
                subset: v1
              weight: 90
            - destination:
                host: order-service
                subset: v2
              weight: 10

    🔹Canary Flow

    ✔90% stable version
    ✔10% new version
    ✔Monitor metrics
    ✔Increase or rollback instantly

  46. 🧪TestContainers – Full .NET Integration Example
    +

    🎯Goal

    Run real infrastructure in tests (no mocks).

    🔹How It Works

    Test start → Start SQL Container → Run Migrations → Call API → Verify DB → Destroy Container

    🔹Example (.NET + SQL Server)

    public class OrderApiTests : IAsyncLifetime
    {
        // One disposable SQL Server container per test class
        private readonly MsSqlContainer _db = new MsSqlBuilder().Build();

        public async Task InitializeAsync()
        {
            await _db.StartAsync();
        }

        public async Task DisposeAsync()
        {
            await _db.DisposeAsync();
        }

        [Fact]
        public async Task CreateOrder_ShouldPersistData()
        {
            // Arrange: in a full test the API under test would be wired to
            // _db.GetConnectionString() instead of a real database
            var client = new HttpClient();

            // Act
            var response = await client.PostAsJsonAsync(
                "/orders", new { ProductId = 1, Quantity = 2 });

            // Assert
            response.EnsureSuccessStatusCode();
        }
    }

    🔹Why TestContainers Matter

    ✔Real DB behavior
    ✔CI/CD safe
    ✔No shared test environments
    ✔Finds production bugs early

  47. 📂Production-Ready GitHub Repo Template
    +

    🔹Repository Structure (Enterprise Standard)

    microservices-platform/

    ├── services/

    │ ├── order-service/

    │ │ ├── src/

    │ │ ├── tests/

    │ │ ├── Dockerfile

    │ │ └── helm/

    ├── shared/

    │ └── contracts/ # Events only

    ├── infrastructure/

    │ ├── terraform/

    │ └── istio/

    ├── pipelines/

    │ └── ci-cd.yml

    └── docs/

    ├── architecture.md

    ├── security.md

    └── runbooks.md

    🔹Mandatory Repo Rules

    ✅Independent deployment
    ❌No shared business logic
    ✅Docs + runbooks
    ✅CI/CD per service

    🔐Security Threat Modeling (Enterprise Reality)
    +

    🎯Goal

    Identify what can go wrong before attackers do.

    🔹STRIDE Threat Model

    • Spoofing: fake JWT token
    • Tampering: message manipulation
    • Repudiation: no audit logs
    • Information Disclosure: plaintext traffic
    • Denial of Service: traffic floods
    • Elevation of Privilege: over-permissive roles

    🔹Mitigations

    ✔OAuth2 + JWT
    ✔mTLS between services
    ✔Least-privilege IAM
    ✔Rate limiting
    ✔Audit logs everywhere

    🔹Secure Request Flow

    Client → OAuth → API Gateway → Istio mTLS → Microservice → Database

    📊SRE SLIs & SLOs (What Production Really Measures)
    +

    🎯Why SRE Metrics Matter

    You can’t manage what you don’t measure.

    🔹SLIs (Indicators – Raw Metrics)

    • Availability: % of successful requests
    • Latency: p95 response time
    • Error rate: % of 5xx responses
    • Throughput: requests/sec

    🔹SLOs (Targets)

    • Order API availability: 99.9%
    • p95 latency: < 300ms
    • Error rate: < 1%

    🔹Error Budget

    100% − SLO = Error Budget

    If SLO = 99.9%
    ➡Allowed failure = 0.1%
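    The same arithmetic as a trivial C# sketch, for a 30-day window:

    // Error budget for a 99.9% availability SLO over a 30-day window
    double slo = 0.999;
    TimeSpan window = TimeSpan.FromDays(30);

    TimeSpan errorBudget = window * (1 - slo);
    Console.WriteLine($"Allowed downtime: {errorBudget.TotalMinutes:F1} minutes"); // ~43.2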

    🔹SRE Decisions Driven by SLOs

    ✔Freeze releases
    ✔Improve reliability
    ✔Scale infrastructure
    ✔Justify tech debt work

    🧠Final End-to-End Picture

    GitHub → CI/CD → TestContainers → AKS → Istio Canary → mTLS Security → SLI/SLO Dashboards → Zero Downtime Production

    ✅You’ve Now Covered True Enterprise Microservices

    ✔Canary deployments (Istio)
    ✔Real integration testing
    ✔Production repo standards
    ✔Threat modeling
    ✔SRE-grade reliability


    🔁Disaster Recovery & Multi-Region AKS
    +

    🎯Goal

    Keep your system available even if an entire Azure region fails.

    🔹Common Multi-Region Patterns

    1️⃣Active–Passive (Most Used)

    • Primary region handles traffic
    • Secondary region warm standby
    • Traffic switches only during failure

    Users → Azure Front Door

    → AKS (Primary) ❌ region down

    → AKS (Secondary) ✅

    ✔Lower cost
    ✔Simple to operate

    2️⃣Active–Active (Advanced)

    • Both regions serve traffic
    • Data replication required

    ✔High availability
    ❌Complex & expensive

    🔹Key DR Components

    • Azure Front Door– global routing & failover
    • Geo-replicated databases
    • Azure Backup
    • Terraform– recreate infra fast

    🔹DR Best Practices

    ✅Stateless services
    ✅Externalized state
    ✅Regular failover drills
    ✅Runbooks documented

    🧪Chaos Engineering (Fault Injection)
    +

    🎯Goal

    Prove your system survives failures before real failures happen.

    🔹What Chaos Tests

    • Pod crash: kill random pods
    • Network latency: inject 500ms delay
    • Dependency failure: break DB connection
    • Node failure: shut down VM

    🔹Chaos Experiment Flow

    Normal Traffic → Inject Failure → Observe Metrics → Recover Automatically?

    🔹Tools Commonly Used

    • Chaos Mesh
    • Azure Chaos Studio
    • Kubernetes fault injection

    🔹What You Validate

    ✔Auto-scaling works
    ✔Retries & timeouts correct
    ✔No cascading failures
    ✔Alerts trigger correctly

    📉Cost Optimization for Microservices
    +

    🎯Goal

    Reduce cloud spend without hurting reliability.

    🔹Major Cost Drivers

    • Idle pods
    • Over-provisioned nodes
    • Chatty services
    • Excessive logging

    🔹Cost Optimization Techniques

    🔹AKS

    • Horizontal Pod Autoscaler
    • Cluster Autoscaler
    • Spot node pools (non-prod)

    🔹Application

    • Async messaging
    • Caching (Redis)
    • Reduce log verbosity

    🔹Golden Rule

    Scale with demand, not assumptions

    🔹Real-World Savings

    ✔30–60% cost reduction common
    ✔Faster performance
    ✔Better predictability

    🔄Saga Pattern with Real Workflows
    +

    🎯Problem

    Microservices cannot use distributed transactions.

    🔹What Is Saga Pattern?

    A sequence of local transactions with compensation on failure.

    🔹Example: Order Workflow

    Create Order → Reserve Inventory → Process Payment → Ship Order

    🔹Failure Scenario

    Payment Fails → Cancel Inventory → Cancel Order

    🔹Saga Types

    1️⃣Choreography (Event-Driven)

    • Services react to events
    • No central controller

    ✔Loosely coupled
    ❌Harder to trace

    2️⃣Orchestration

    • Central Saga Controller
    • Explicit workflow

    ✔Clear control
    ✔Easier debugging

    🔹When to Use Saga

    ✅Business workflows
    ✅Event-driven systems
    ❌Simple CRUD apps

    🧠Architecture Decision Records (ADR)
    +

    🎯Goal

    Explain why a decision was made, not just what was built.

    🔹Why ADRs Matter

    • Team changes
    • Long-lived systems
    • Avoid repeating debates
    • Faster onboarding

    🔹ADR Template (Simple & Powerful)

    ADR-001: Use Event-Driven Communication

    Status: Accepted

    Context: Synchronous calls caused tight coupling.

    Decision: Use async events via messaging.

    Consequences:
    + Better scalability
    + Eventual consistency
    - More complex debugging

    🔹Where ADRs Live

    /docs/adr/

    ├── adr-001-events.md

    ├── adr-002-aks.md

    🔹What to Record

    ✔Architecture choices
    ✔Technology selection
    ✔Trade-offs
    ✔Rejected options

    🧠Final Enterprise View (Everything Together)

    Multi-Region AKS → Chaos Engineering → Cost Optimization → Saga Workflows → ADR Documentation → Resilient, Scalable, Auditable Systems

    ✅You’ve Now Reached Principal / Architect Level Topics

    ✔Disaster recovery at scale
    ✔Failure-proof systems
    ✔Cost-efficient cloud design
    ✔Distributed business workflows
    ✔Long-term architectural clarity


    🧪Chaos Experiments Walkthrough (Step-by-Step)
    +

    🎯Objective

    Validate that your system remains reliable when things fail (because failures will happen).

    🔹Step 1: Define Steady State

    Decide what “healthy” means:

    • Error rate < 1%
    • p95 latency < 300 ms
    • No data loss

    📌This is your baseline.

    🔹Step 2: Choose Failure Scenario

    Common chaos experiments:

    • Kill random pods
    • Inject network latency
    • Block database access
    • Simulate node failure

    🔹Step 3: Inject Fault

    Normal Traffic → Chaos Tool Injects Failure → System Under Stress

    Example:

    • Kill 30% of Order Service pods

    🔹Step 4: Observe & Measure

    Watch:

    • Auto-scaling
    • Retries & circuit breakers
    • Alert firing
    • User impact

    🔹Step 5: Learn & Improve

    • Slow recovery: tune HPA
    • Errors spike: improve retries
    • No alerts: fix monitoring

    ✔Chaos is continuous, not one-time

    🔄Saga Pattern Implementation in .NET (Real Example)
    +

    🎯Problem

    Distributed transactions do not work in microservices.

    🔹Business Workflow Example

    E-commerce order:

    Create Order → Reserve Inventory → Process Payment → Ship Order

    🔹Saga Orchestration (Recommended)

    Saga Controller

    ├─ Call Order Service

    ├─ Call Inventory Service

    ├─ Call Payment Service

    └─ Handle Compensation

    🔹.NET Pseudo-Implementation

    public async Task PlaceOrderAsync()
    {
        await orderService.CreateOrder();

        try
        {
            await inventoryService.Reserve();
            await paymentService.Pay();
        }
        catch
        {
            // Compensation: undo the completed steps in reverse order
            await inventoryService.Release();
            await orderService.Cancel();
            throw;
        }
    }

    🔹Key Characteristics

    ✔Each step is a local transaction
    ✔Failures trigger compensation
    ✔Eventual consistency

    🔹When to Use Saga

    ✅Multi-step business workflows
    ✅Financial transactions
    ❌Simple CRUD services

    📉Azure Cost Breakdown Analysis (Where Money Really Goes)
    +

    🎯Goal

    Understand what you are paying forand why.

    🔹Typical Cost Distribution

    • AKS nodes: 45–60%
    • Databases: 20–30%
    • Networking: 5–10%
    • Logs & monitoring: 5–15%

    🔹Hidden Cost Traps

    ❌Over-sized node pools
    ❌Always-on non-prod clusters
    ❌Excessive logs
    ❌Chatty microservices

    🔹Optimization Playbook

    AKS

    • Right-size node pools
    • Use autoscaling
    • Spot nodes for non-prod

    Application

    • Async messaging
    • Caching hot paths
    • Reduce log verbosity

    🔹Cost Optimization Outcome

    ✔30–50% savings typical
    ✔Better performance
    ✔Predictable bills

    🔐Security Audits & Compliance (Enterprise Reality)
    +

    🎯Goal

    Ensure system meets security & regulatory requirements.

    🔹What a Security Audit Covers

    Infrastructure

    • Network isolation
    • Public exposure
    • Firewall rules

    Identity & Access

    • Least privilege
    • Role separation
    • Token lifetimes

    Application

    • OWASP Top 10
    • Input validation
    • Secrets handling

    🔹Compliance Examples

    • ISO 27001: information security
    • SOC 2: controls & auditing
    • PCI DSS: payment systems
    • GDPR: data privacy

    🔹Audit Flow

    Architecture Review → Threat Modeling → Control Verification → Gap Analysis → Remediation → Re-Audit

    🔹Common Audit Findings

    ❌Secrets in config files
    ❌No mTLS internally
    ❌Over-privileged identities
    ❌Missing audit logs

    ✔All fixable with proper design

    🧠Big Picture (How All This Fits Together)

    Chaos Testing + Saga Workflows + Cost Controls + Security Audits → Stable, Secure, Cost-Efficient Platform

    ✅You Are Now at Staff / Principal Architect Level

    ✔You can design failure-proof systems
    ✔You can handle distributed transactions
    ✔You understand cloud economics
    ✔You can pass security audits


    📂Complete GitHub Repo (Ready to Clone – Enterprise Standard)
    +

    🎯What “Ready to Clone” Means

    ✔Builds locally
    ✔Runs in AKS
    ✔CI/CD included
    ✔IaC included
    ✔Docs & runbooks included

    🔹Repository Structure (Monorepo – Recommended)

    microservices-platform/

    ├── services/

    │ ├── order-service/

    │ │ ├── src/

    │ │ ├── tests/

    │ │ ├── Dockerfile

    │ │ └── helm/

    │ ├── payment-service/

    │ └── inventory-service/

    ├── shared/

    │ └── contracts/ # Events only (DTOs)

    ├── infrastructure/

    │ ├── terraform/ # AKS, ACR, DB, Key Vault

    │ ├── istio/ # Canary, mTLS rules

    ├── chaos/

    │ └── experiments/ # Chaos YAML files

    ├── pipelines/

    │ └── ci-cd.yml

    ├── docs/

    │ ├── architecture.md

    │ ├── adr/

    │ ├── runbooks.md

    └── README.md

    🔹Hard Rules (Enterprise)

    • ❌No shared business logic
    • ✔Each service deploys independently
    • ✔Infra fully reproducible
    • ✔Docs are mandatory
    🧪Chaos Experiment Scripts (Real Kubernetes Faults)
    +

    🎯Purpose

    Proactively break the system to prove it recovers automatically.

    🔹Common Chaos Experiments

    1️⃣Pod Kill Experiment

    apiVersion: chaos-mesh.org/v1alpha1
    kind: PodChaos
    metadata:
      name: kill-order-pods
    spec:
      action: pod-kill
      mode: fixed
      value: "2"
      selector:
        labelSelectors:
          app: order-service
      duration: "60s"

    ✔Tests:

    • Auto-healing
    • Readiness probes
    • Load balancing

    2️⃣Network Latency Injection

    apiVersion: chaos-mesh.org/v1alpha1
    kind: NetworkChaos
    metadata:
      name: payment-latency
    spec:
      action: delay
      delay:
        latency: "500ms"
      selector:
        labelSelectors:
          app: payment-service

    ✔Tests:

    • Retry policies
    • Circuit breakers
    • Timeouts

    🔹Chaos Execution Cycle

    Baseline → Inject Failure → Observe Metrics → Auto-Recovery → Improve Weakness

    🔄Saga Implementation with Messaging (Production-Grade)
    +

    🎯Problem

    No distributed transactions across microservices.

    🔹Business Flow (Order Saga)

    OrderCreated → InventoryReserved → PaymentProcessed → OrderCompleted

    🔹Failure & Compensation

    PaymentFailed → InventoryReleased → OrderCancelled

    🔹Event-Driven Saga (Choreography)

    🔹Events

    • OrderCreated
    • InventoryReserved
    • PaymentFailed
    • OrderCancelled

    🔹.NET Event Publisher Example

    await serviceBusSender.SendMessageAsync(
        new ServiceBusMessage(JsonSerializer.Serialize(
            new OrderCreated(orderId))));

    🔹Inventory Service Reaction

    if (message.Type == "OrderCreated")
    {
        ReserveInventory();
        Publish(new InventoryReserved(orderId));
    }

    🔹Why Messaging-Based Saga?

    ✔Loose coupling
    ✔No central bottleneck
    ✔Scales independently
    ✔Natural retry handling

    📊SRE Dashboards with Real Metrics (What Ops Actually Watch)
    +

    🎯Goal

    Measure reliability, not just uptime.

    🔹Core SRE Metrics (RED + USE)

    🔹RED (Services)

    • Rate: requests/sec
    • Errors: % of 5xx responses
    • Duration: p95 latency

    🔹USE (Infrastructure)

    • Utilization: CPU / memory
    • Saturation: queue depth
    • Errors: pod restarts

    🔹Example SLOs

    • Order API availability: 99.9%
    • p95 latency: < 300 ms
    • Error rate: < 1%

    🔹Dashboard Sections

    ✔Service health
    ✔Dependency latency
    ✔Error budgets
    ✔Pod restarts
    ✔Message queue depth

    🔹Alert Examples

    • Error rate > 2% for 5 mins
    • p95 latency > 500 ms
    • Queue backlog growing
    • Pod crash loop detected

    🧠Final End-to-End Enterprise Picture

    Clone Repo → CI/CD → TestContainers → Chaos Experiments → Event-Driven Saga → Istio Canary → SRE Dashboards → Stable Production

    ✅You’ve Reached End-to-End Microservices Mastery

    ✔Production-ready repo structure
    ✔Real chaos scripts
    ✔Messaging-based saga workflows
    ✔SRE-grade observability

Agile methodology

+
12 principles of agile?
+
Principles include customer satisfaction, welcoming change, frequent delivery, collaboration, motivated individuals, working software as the measure of progress, sustainable development, technical excellence, simplicity, self-organizing teams, and regular reflection for continuous improvement.
Acceptance criteria?
+
Acceptance criteria define the conditions a user story must meet to be considered complete.
Acceptance testing?
+
Acceptance testing verifies that software meets business requirements and user expectations.
Adaptive planning?
+
Adaptive planning adjusts plans based on changing requirements and feedback.
Advantages & disadvantages of agile
+
Agile enables faster delivery, better customer collaboration, flexibility to change, and improved product quality. However, it may lack predictability, require experienced teams, and may struggle with large distributed teams or fixed-budget environments.
Agile adoption challenges?
+
Challenges include resistance to change, lack of management support, poor collaboration, and unclear roles.
Agile backlog refinement best practices?
+
Review the backlog regularly, prioritize items, clarify requirements, and break down large stories.
Agile backlog refinement frequency?
+
Typically done once per sprint to keep the backlog up-to-date and prioritized.
Agile ceremonies?
+
Agile ceremonies include sprint planning, daily stand-up, sprint review, and sprint retrospective.
Agile change management?
+
Agile change management handles requirement and process changes iteratively and collaboratively.
Agile coach?
+
An Agile coach helps teams and organizations adopt and improve Agile practices.
Agile continuous delivery?
+
Continuous delivery ensures software can be reliably released to production at any time.
Agile continuous feedback?
+
Continuous feedback ensures product and process improvements throughout development.
Agile continuous improvement?
+
Continuous improvement involves regularly inspecting and adapting processes, tools, and practices.
Agile cross-functional team benefit?
+
Cross-functional teams reduce handoffs, improve collaboration, and deliver faster.
Agile customer collaboration?
+
Customer collaboration involves stakeholders throughout the development process for feedback and alignment.
Agile customer value?
+
Customer value refers to delivering features and functionality that meet user needs and expectations.
Agile documentation?
+
Agile documentation is concise just enough to support development and collaboration.
Agile epic decomposition?
+
Breaking epics into smaller actionable user stories for implementation.
Agile estimation techniques?
+
Techniques include story points, planning poker, T-shirt sizing, and affinity estimation.
Agile estimation?
+
Agile estimation is the process of predicting the effort or complexity of user stories or tasks.
Agile frameworks?
+
They are structured methods like Scrum, Kanban, SAFe, and XP that implement Agile principles in development.
Agile impediment?
+
An impediment is anything blocking the team from achieving its sprint goal.
Agile kanban vs scrum?
+
Scrum uses sprints and roles; Kanban is continuous and focuses on visualizing workflow and limiting WIP.
Agile key success factors?
+
Key factors include collaboration, clear vision, empowered teams, adaptive planning, and iterative delivery.
Agile manifesto?
+
Agile manifesto is a set of values and principles guiding Agile development.
Agile maturity model?
+
Agile maturity model assesses how effectively an organization applies Agile practices.
Agile methodology?
+
Agile is an iterative software development approach focusing on flexibility, customer collaboration, and incremental delivery through continuous feedback.
Agile metrics?
+
Agile metrics track team performance, progress, quality, and predictability.
Agile mindset?
+
Agile mindset values collaboration, flexibility, continuous improvement, and delivering customer value.
Agile mvp vs prototype?
+
MVP delivers minimal usable product; prototype is a preliminary model for validation and experimentation.
Agile pair programming?
+
Pair programming involves two developers working together at one workstation to improve code quality.
Agile portfolio management?
+
Portfolio management applies Agile principles to manage multiple projects and initiatives.
Agile process?
+
Agile process involves planning, developing in small increments, testing, review, and adapting based on feedback.
Agile product vision?
+
Product vision defines the long-term goal and direction of the product.
Agile project management?
+
Agile project management applies Agile principles to plan, execute, and deliver projects iteratively.
Agile quality assurance?
+
QA integrates testing early and continuously in the Agile development cycle.
Agile release planning horizon?
+
Defines a planning period for delivering features or increments, usually spanning several sprints.
Agile release planning?
+
Agile release planning defines a roadmap and schedule for delivering product increments over multiple sprints.
Agile release train?
+
Release train coordinates multiple teams to deliver value in a predictable schedule.
Agile retrospection action items?
+
Action items are improvements identified during retrospectives to implement in future sprints.
Agile retrospectives?
+
Retrospectives are meetings to reflect on the process, discuss improvements, and take action.
Agile risk management?
+
Agile risk management identifies, assesses, and mitigates risks iteratively during development.
Agile risk mitigation?
+
Risk mitigation involves identifying, monitoring, and addressing risks iteratively.
Agile roles and responsibilities?
+
Roles include Product Owner, Scrum Master, Development Team, and Stakeholders.
Agile scaling challenges?
+
Challenges include coordination between teams, consistent processes, and maintaining Agile culture.
Agile servant leadership role?
+
A servant leader supports team autonomy, removes impediments, and fosters continuous improvement.
Agile sprint goal?
+
Sprint goal is a clear objective that guides the team's work during a sprint.
Agile stakeholder engagement?
+
Engaging stakeholders throughout development for feedback, validation, and alignment.
Agile team collaboration?
+
Team collaboration emphasizes communication, transparency, and shared responsibility.
Agile testing
+
Agile testing is a continuous testing approach aligned with Agile development. It focuses on early defect detection, customer feedback, and testing alongside development rather than after coding completes.
Agile testing?
+
Agile testing involves continuous testing throughout the development lifecycle.
Agile timeboxing benefit?
+
Timeboxing improves focus and predictability and encourages timely delivery.
Agile?
+
Agile is a methodology for software development that emphasizes iterative development, collaboration, and flexibility to change.
Application binary interface
+
ABI defines how software components interact at the binary level. It standardizes function calls, data types, and machine interfaces.
Backlog grooming or refinement?
+
The process of reviewing, prioritizing, and estimating backlog items to ensure readiness for future sprints.
Backlog grooming/refinement?
+
Backlog grooming is the process of reviewing and prioritizing the product backlog.
Backlog prioritization?
+
Backlog prioritization determines the order of user stories based on value risk and dependencies.
Backlog refinement?
+
Ongoing process of reviewing, clarifying, and estimating backlog items to prepare them for future sprints.
Behavior-driven development (bdd)?
+
BDD involves writing tests in natural language to align development with business behavior.
Best time to use agile
+
Agile is ideal when requirements are evolving, the project needs frequent updates, and user feedback is essential. It suits dynamic environments and product-based development.
What does a build breaker mean?
+
A build breaker is an issue introduced into the codebase that causes the CI pipeline or build process to fail. It prevents deployment and needs immediate fixing before new features continue.
Burn-down chart?
+
A burn-down chart shows remaining work in a sprint or project over time.
Burn-up & burn-down charts
+
Burn-down charts show remaining work; burn-up charts track completed progress. Both help monitor sprint or project progress.
Burn-up chart?
+
A burn-up chart shows work completed versus total work in a project or release.
Can cross-functional teams work with external dependencies?
+
Yes, but dependencies should be managed with clear communication, planning, and incremental delivery.
Challenges in agile development
+
Unclear requirements, integration issues, team dependencies, cultural resistance, and estimation challenges are common.
Common agile metrics
+
Velocity, cycle time, burndown rate, lead time, defect density, and customer satisfaction are common metrics.
Common agile metrics?
+
Common metrics include velocity, burn-down/burn-up charts, cycle time, lead time, and cumulative flow.
Confluence page template?
+
Predefined layouts to standardize documentation like architecture diagrams, meeting notes, or requirements.
Confluence?
+
Confluence is a collaboration wiki platform for documenting requirements, architecture, and project knowledge.
Continuous delivery (cd)?
+
CD is the practice of automatically deploying code to production or staging after CI.
Continuous integration (ci)?
+
CI is the practice of frequently merging code changes to detect errors early.
Cross-functional team?
+
A cross-functional team has members with all skills needed to deliver a product increment.
Cross-functional team?
+
A team where members have different skills to complete a project from end to end, including development, testing, and design.
How do cross-functional teams handle knowledge sharing?
+
Through pair programming, documentation, workshops, demos, and retrospectives.
Why are cross-functional teams important in agile?
+
They reduce handoffs, improve collaboration, accelerate delivery, and promote shared responsibility.
How do cross-functional teams improve quality?
+
Integrated expertise reduces errors, promotes early testing, and ensures design and code quality throughout the sprint.
Cumulative flow diagram?
+
Visualizes work in different states over time, helping identify bottlenecks in workflow.
Cycle time?
+
Time taken from when work starts on a task until it is completed. Helps measure efficiency.
Daily stand-up meeting
+
A short 10–15 minute meeting where team members discuss what they completed, what they will do next, and any blockers. It improves transparency and collaboration.
Daily stand-up?
+
Daily stand-up is a short meeting where team members share progress, plans, and blockers.
Definition of done (dod)?
+
DoD is a shared agreement of what constitutes a completed user story or task.
Definition of done (dod)?
+
Criteria that a backlog item must meet to be considered complete, including code quality, testing, and documentation.
Definition of ready (dor)?
+
DoR defines conditions a user story must meet to be eligible for a sprint.
Definition of ready (dor)?
+
Criteria that a backlog item must meet before being pulled into a sprint. Ensures clarity and reduces blockers.
Diffbet a bug and a story in the backlog?
+
A bug represents a defect or error; a story is a new feature or enhancement. Both are tracked but may differ in priority.
Diffbet agile and devops?
+
Agile focuses on the development process; DevOps focuses on collaboration across development, deployment, and operations.
Diffbet agile and lean?
+
Agile focuses on iterative development; Lean focuses on waste reduction and process optimization.
Diffbet agile and waterfall?
+
Agile is iterative and flexible; Waterfall is sequential and rigid.
Diffbet burnup and burndown charts?
+
Burndown shows remaining work over time; burnup shows work completed and total scope over time.
Diffbet cross-functional and functional teams?
+
Cross-functional teams have multiple skill sets in one team; functional teams are organized by specialized roles.
Diffbet epic, feature, and user story?
+
Epic is a large goal, Feature is a smaller functionality, User Story is a detailed, implementable piece of work.
Diffbet jira and confluence?
+
Jira is for task and project tracking; Confluence is for documentation and knowledge management. Both integrate for traceability.
Diffbet product backlog and sprint backlog?
+
Product backlog is the full list of features, bugs, and enhancements. Sprint backlog is a subset selected for the sprint.
Diffbet scrum and kanban?
+
Scrum uses fixed sprints and roles; Kanban is continuous and focuses on workflow visualization.
Diffbet story points and hours?
+
Story points measure relative effort; hours estimate actual time to complete a task.
Diffbet waterfall and agile.
+
Waterfall is linear and sequential, while Agile is iterative and flexible. Agile adapts to change, whereas Waterfall requires full requirements upfront.
Difference agile vs scrum
+
Agile is a broader methodology mindset, while Scrum is a specific framework under Agile. Scrum uses roles, ceremonies, and sprints; Agile provides principles and values.
Epic in agile?
+
An Epic is a large user story that can be broken into smaller stories.
Epic, user stories & tasks
+
An epic is a large feature broken into user stories. A user story describes a requirement from the user's perspective, and tasks break stories into development activities.
Exploratory testing in agile?
+
Exploratory testing is an informal testing approach where testers learn and test simultaneously.
Four values of agile manifesto?
+
Values: individuals and interactions over processes and tools; working software over documentation; customer collaboration over contract negotiation; responding to change over following a plan.
Impediment
+
A problem or blocker preventing a team from progressing. Scrum Master helps resolve it.
Importance of sprint retrospective?
+
To reflect on the sprint, identify improvements, and strengthen team collaboration and processes.
Importance of sprint review?
+
To demonstrate completed work, gather feedback, and validate alignment with business goals.
Important parts of agile process.
+
Backlog refinement, sprint cycles, continuous testing, customer involvement, retrospectives, and deployment.
Increment
+
An increment is the sum of completed product work at the end of a sprint, delivering potentially shippable functionality.
Incremental delivery?
+
Delivering working software in small, usable increments rather than waiting for a full release.
Incremental vs iterative delivery?
+
Incremental delivery ships small usable pieces; iterative delivery improves them over cycles based on feedback.
How is velocity used in sprint planning?
+
Velocity is the average amount of work completed in previous sprints. It helps estimate how much the team can commit to in the current sprint.
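Illustrative example: if the last three sprints completed 30, 34, and 32 story points, the average velocity is (30 + 34 + 32) / 3 = 32, so the team would plan roughly 32 points for the next sprint.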
Iteration in agile?
+
Iteration is a time-boxed cycle of development also known as a sprint.
Iterative & incremental development
+
Iterative development improves the system through repeated cycles, while incremental development delivers the system in small functional parts. Agile combines both to deliver working software early and refine it based on feedback.
Jira issue types?
+
Common types: Epic, Story, Task, Bug, Sub-task. Each represents a different level of work.
Jira workflow?
+
A sequence of statuses and transitions representing the lifecycle of an issue. Supports automation and approvals.
Jira?
+
Jira is a project management tool used for issue tracking, Agile boards, sprints, and backlog management.
Kanban
+
Kanban focuses on visual workflow management using a board and continuous delivery. Work-in-progress limits help efficiency.
Kanban board?
+
A Kanban board visualizes work items, workflow stages, and progress.
Kanban wip limit?
+
WIP limit restricts the number of work items in progress to improve flow and reduce bottlenecks.
Key outputs of sprint planning?
+
Sprint backlog, sprint goal, task estimates, and commitment of the team to complete selected items.
Key principles of agile?
+
Key principles include customer collaboration, responding to change, working software, and individuals and interactions over processes and tools.
Lead time?
+
Time from backlog item creation to delivery. Useful for overall process efficiency.
LeSS?
+
LeSS (Large-Scale Scrum) extends Scrum principles to multiple teams working on the same product.
How long should sprint planning take?
+
Typically 2–4 hours for a 2-week sprint. Longer sprints may require more time proportionally.
Main roles in scrum
+
Scrum has three key roles: Product Owner, who manages backlog and priorities; Scrum Master, who ensures process compliance and removes blockers; and the Development Team, responsible for delivering increments every sprint.
Major agile components.
+
User stories, sprint planning, backlog, iterations, stand-up meetings, sprint reviews, and retrospectives.
Minimum viable product (mvp)?
+
MVP is the simplest version of a product that delivers value and can gather feedback.
MoSCoW prioritization?
+
MoSCoW prioritization categorizes backlog items as Must have, Should have, Could have, and Won't have.
Nexus?
+
Nexus is a framework to scale Scrum across multiple teams with integrated work.
Obstacles to agile
+
Challenges include resistance to change, unclear requirements, lack of training, poor communication, distributed teams, and legacy constraints.
How often should the backlog be refined?
+
Ongoing, but typically once per sprint, about 5–10% of the sprint time is used for grooming.
Other agile frameworks
+
Kanban, XP (Extreme Programming), SAFe, Crystal, and Lean are major frameworks besides Scrum.
Pair programming
+
Two developers work together on one workstation. It improves code quality and knowledge sharing and reduces errors. QA often pairs in as well: collaborating from the start, writing acceptance criteria, testing continuously, and ensuring quality through automation and feedback.
Who participates in sprint planning?
+
The Scrum Master, Product Owner, and Development Team participate. PO clarifies backlog items, Dev Team estimates effort, and Scrum Master facilitates.
Planning poker
+
A collaborative estimation technique where teams assign story points using cards. Helps achieve shared understanding and consensus.
Planning poker?
+
Planning Poker is a consensus-based estimation technique using cards with story points.
Popular agile tools
+
Common Agile tools include Jira, Trello, Azure DevOps, Asana, Rally, Monday.com, and VersionOne. They help manage backlogs, tasks, sprints, and reporting.
Principles of agile testing
+
Principles include customer-focused testing, continuous feedback, early testing, frequent delivery, collaboration, and embracing change. Testing is seen as a shared responsibility, not a separate stage.
Product backlog?
+
The product backlog is a prioritized list of features, enhancements, and fixes for the product.
Product backlog?
+
An ordered list of features, bugs, and technical work maintained by the Product Owner. It evolves continuously as requirements change.
Product increment?
+
Product increment is the sum of all completed work in a sprint that meets the definition of done.
Product owner?
+
The Product Owner represents stakeholders, manages the backlog, and ensures value delivery.
Product roadmap
+
A strategic plan outlining vision, milestones, timelines, and prioritized features for product development.
Purpose of sprint planning
+
Sprint planning determines sprint goals, selects backlog items, and defines how the work will be completed.
Qualities of a scrum master
+
A Scrum Master should have communication and facilitation skills, problem-solving ability, servant leadership mindset, patience, and knowledge of Agile principles to guide the team effectively.
Qualities of an agile tester
+
An Agile tester should be collaborative, adaptable, and proactive. They must understand business requirements, communicate well, and focus on continuous improvement and quick feedback cycles.
Refactoring
+
Refactoring improves existing code without changing its external behavior. It enhances readability, performance, and maintainability while reducing technical debt.
Release candidate
+
A nearly completed product version ready for final testing and approval before release.
Who is responsible for backlog management?
+
The Product Owner is primarily responsible, with input from stakeholders and the development team.
How do retrospectives improve delivery?
+
They help identify process improvements, bottlenecks, and team collaboration issues to improve future sprints.
Role of scrum master in sprint planning?
+
Facilitates discussion, ensures clarity, prevents scope creep, and promotes team collaboration.
Role of the scrum master in cross-functional teams?
+
Facilitates collaboration, removes impediments, and promotes self-organization among team members.
SAFe?
+
SAFe (Scaled Agile Framework) is a framework to scale Agile practices across large enterprises.
Scaling agile?
+
Scaling Agile applies Agile practices across multiple teams or large projects.
scrum & kanban used?
+
Scrum is used where work is iterative with evolving requirements, such as software development and product improvement. Kanban is used in support, maintenance, DevOps, and continuous delivery environments where work is flow-based rather than sprint-based.
Scrum cycle length
+
A scrum cycle, or sprint, usually lasts 1–4 weeks. The duration remains consistent throughout the project.
Scrum master?
+
Scrum Master facilitates Scrum processes removes impediments and supports the team.
Scrum of scrums
+
A technique used when multiple scrum teams work together. Representatives meet to coordinate dependencies and align progress.
Scrum?
+
Scrum is an Agile framework that uses roles, events, and artifacts to manage complex projects.
Servant leadership?
+
Servant leadership focuses on supporting and enabling the team rather than directing it.
Spike & zero sprint
+
A spike is research activity to resolve uncertainty or technical issues. Zero sprint (Sprint 0) involves initial setup activities like architecture, environment, and backlog preparation before development.
Spike?
+
A spike is a time-boxed research activity to explore a solution or reduce uncertainty.
Spotify model?
+
The Spotify model organizes Agile teams as squads, tribes, chapters, and guilds to foster autonomy and alignment.
Sprint backlog vs product backlog
+
The product backlog contains all requirements prioritized by the product owner, while the sprint backlog contains the selected items for the current sprint. Sprint backlog is short-term; product backlog is long-term.
Sprint backlog?
+
The sprint backlog is a subset of the product backlog selected for implementation in a sprint.
Sprint delivery?
+
Sprint delivery is the completion and demonstration of committed backlog items to stakeholders at the end of a sprint.
Sprint goal?
+
A short description of what the sprint aims to achieve. It guides the team and aligns stakeholders.
Sprint planning, review & retrospective
+
Sprint planning defines sprint goals and backlog. Sprint review demonstrates work to stakeholders. Retrospective reflects on improvements.
Sprint planning?
+
Sprint planning is a meeting where the team decides what work will be done in the upcoming sprint.
Sprint planning?
+
Sprint Planning is a Scrum ceremony where the team decides which backlog items to work on in the upcoming sprint. It defines the sprint goal and estimated tasks.
Sprint retrospective?
+
Sprint retrospective is a meeting to reflect on the sprint and identify improvements.
Sprint review?
+
Sprint review is a meeting to demonstrate completed work to stakeholders and gather feedback.
Sprint?
+
A sprint is a time-boxed iteration usually 1-4 weeks where a set of work is completed.
Story points
+
A unit for estimating effort or complexity in Scrum, not tied to time. Helps predict workload and sprint capacity.
Story points?
+
Story points are relative measures of effort, complexity, or risk for user stories.
Team velocity tracking?
+
Tracking velocity helps predict how much work a team can complete in future sprints.
Technical debt?
+
Technical debt is the cost of shortcuts or suboptimal solutions that need refactoring later.
Test-driven development (tdd)
+
TDD involves writing tests before writing code. It ensures better design, reduces bugs, and supports regression testing.
Test-driven development (tdd)?
+
TDD is a practice where tests are written before the code to ensure functionality meets requirements.
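A minimal TDD sketch in TypeScript with Jasmine (the add function is a hypothetical example): the test is written first and fails, then the smallest implementation makes it pass.
// Step 1: write the failing test first
describe('add', () => {
  it('returns the sum of two numbers', () => {
    expect(add(2, 3)).toBe(5);
  });
});
// Step 2: write the minimal code that makes the test pass, then refactor
function add(a: number, b: number): number {
  return a + b;
}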
Theme in agile?
+
A theme is a collection of related user stories or epics around a common objective.
Time-boxing?
+
Time-boxing is allocating a fixed duration to activities to improve focus and productivity.
How to balance stakeholder requests in the backlog?
+
Evaluate based on business value, urgency, dependencies, and capacity. Communicate trade-offs transparently.
How to control permissions in Confluence?
+
Set space-level or page-level permissions for viewing, editing, or commenting based on user roles or groups.
How to create a Kanban board in Jira?
+
Create a board from project → select Kanban → configure columns → add issues for workflow tracking.
How to handle unplanned work during a sprint?
+
Minimize interruptions. If unavoidable, negotiate scope adjustments with PO and team. Track and learn for future planning.
How to link Jira issues in Confluence?
+
Use Jira macro to embed issues, sprints, or reports directly into Confluence pages.
How to track progress in Jira?
+
Use dashboards, reports, burndown charts, and cumulative flow diagrams.
Tracer bullet
+
A technique delivering a thin working slice of the system early to validate architecture and direction.
Types of agile methodology.
+
Scrum, Kanban, XP (Extreme Programming), Lean, SAFe, and Crystal are popular Agile variants.
Types of burn-down charts
+
Types include sprint burndown, release burndown, and product burndown charts. Each offers different timelines and scope levels.
When not to use agile
+
Avoid Agile in fixed-scope, fixed-budget projects, strict compliance domains, or when customer feedback is unavailable.
Use waterfall instead of scrum
+
Use Waterfall when requirements are fixed, documentation-heavy, regulated, and no major changes are expected. It fits infrastructure or hardware projects better.
User story?
+
A user story is a short simple description of a feature from the perspective of an end user.
Velocity in agile
+
Velocity measures the amount of work a team completes in a sprint, typically in story points. It helps estimate future sprint capacity and planning.
Velocity in agile?
+
Velocity measures the amount of work a team completes in a sprint.
Velocity?
+
Velocity measures the amount of work a team completes in a sprint, often in story points. Helps with forecasting.
How do you balance speed and quality in delivery?
+
Prioritize well-defined backlog items, maintain testing standards, and avoid overcommitment.
How do you communicate delivery status to stakeholders?
+
Use sprint reviews, dashboards, Jira reports, and release notes for transparency.
How do you ensure effective communication in cross-functional teams?
+
Daily stand-ups, retrospectives, sprint reviews, shared documentation, and collaboration tools help maintain transparency.
How do you ensure quality in delivery?
+
Unit tests, code reviews, automated testing, CI/CD pipelines, and adherence to Definition of Done.
How do you ensure team accountability?
+
Transparent commitments, daily stand-ups, peer reviews, and clear Definition of Done.
How do you ensure timely delivery?
+
Clear sprint goals, proper estimation, daily tracking, and removing blockers proactively help ensure on-time delivery.
How do you estimate tasks in sprint planning?
+
Using story points, ideal hours, or T-shirt sizing. Estimation considers complexity, effort, and risk.
How do you handle blocked tasks?
+
Identify blockers early, escalate if needed, and collaborate to remove impediments quickly.
How do you handle changing priorities mid-sprint?
+
Limit mid-sprint changes; negotiate with PO, document impact, and adjust future sprint planning.
How do you handle conflicts in cross-functional teams?
+
Encourage open communication, identify root causes, facilitate discussions, and align on shared goals.
How do you handle incomplete stories at sprint end?
+
Move them back to backlog, review root cause, and include in future sprints after re-estimation.
How do you handle skill gaps in cross-functional teams?
+
Encourage knowledge sharing, mentoring, pair programming, and cross-training to build team capability.
How do you handle technical debt in the backlog?
+
Track and prioritize technical debt items along with functional stories to ensure system maintainability.
How do you handle urgent production issues during a sprint?
+
Address them immediately if critical, or plan within sprint buffer. Document impact on sprint goals.
How do you improve team collaboration?
+
Facilitate open communication, collaborative tools, clear goals, and regular retrospectives.
How do you manage dependencies across teams?
+
Identify dependencies early, communicate timelines, and coordinate during planning and stand-ups.
How do you manage scope creep during a sprint?
+
Freeze the sprint backlog, handle new requests in the next sprint, and communicate priorities clearly.
How do you measure productivity in cross-functional teams?
+
Use velocity, cycle time, burndown charts, quality metrics, and stakeholder feedback.
How do you measure successful delivery?
+
Completion of sprint backlog, meeting Definition of Done, stakeholder satisfaction, and business value delivered.
How do you measure team performance?
+
Velocity, quality metrics, stakeholder satisfaction, sprint predictability, and adherence to Definition of Done.
How do you prioritize backlog items?
+
Using MoSCoW (Must, Should, Could, Won’t), business value, risk, dependencies, and ROI.
How do you track multiple sprints simultaneously?
+
Use program boards, Jira portfolios, or scaled Agile tools like SAFe to visualize cross-team progress.
How do you track sprint progress?
+
Use burndown charts, task boards, and daily stand-ups to monitor completed versus remaining work.

Angular

+
:host property in CSS
+
:host targets the component’s root element from within its CSS., Allows styling the host without affecting other components.
Activated route?
+
ActivatedRoute provides info about the current route., Access route params, query params, fragments, and data., Injected into components via constructor.
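A minimal sketch, assuming a route configured as user/:id (UserComponent is a hypothetical example):
import { Component } from '@angular/core';
import { ActivatedRoute } from '@angular/router';

@Component({ selector: 'app-user', template: 'User: {{ userId }}' })
export class UserComponent {
  userId = '';
  constructor(private route: ActivatedRoute) {
    // paramMap is an Observable that re-emits when the :id segment changes
    this.route.paramMap.subscribe(params => this.userId = params.get('id') ?? '');
  }
}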
Active router links?
+
Active links are highlighted when the route matches the current URL. Use the routerLinkActive directive, e.g. <a routerLink="/home" routerLinkActive="active">Home</a>. This gives UI feedback for navigation.
Add web workers in your application?
+
Use the Angular CLI: ng generate web-worker <name>. Update angular.json and enable the TypeScript worker configuration. Offloads heavy computation to background threads for performance.
Advantages and disadvantages of Angular
+
Advantages: Component-based, TypeScript, SPA support, tooling., Disadvantages: Steep learning curve, larger bundle size, complex for small apps.
Advantages of Angular over other frameworks
+
Two-way data binding reduces boilerplate code., Dependency injection improves modularity., Rich ecosystem, TypeScript support, and reusable components.
Advantages of Angular over other frameworks
+
Strong TypeScript support., Declarative templates with data binding., Rich ecosystem and official libraries (Material, Forms, RxJS)., Modular, testable, and maintainable code.
Advantages of Angular over React
+
Angular is a full-fledged framework, React is a library., Built-in support for forms, routing, and HTTP., Strong TypeScript integration for better type safety.
Advantages of Angular?
+
Two-way data binding, modularity, dependency injection, TypeScript support, and powerful CLI.
Advantages of AOT
+
Faster app startup., Smaller bundle size., Detects template errors at build time., Better security by compiling templates ahead of time.
Advantages of Bazel tool
+
Faster builds with caching, Parallel execution, Language-agnostic support, Scales well for monorepos
Angular Animation?
+
Angular Animation allows creating smooth UI animations in components., Built on Web Animations API with @angular/animations., Supports transitions, keyframes, triggers, and states for dynamic effects.
How does an Angular application work?
+
Angular apps run in the browser., Templates define UI, components handle logic, and services manage data., Data binding updates the view dynamically when the model changes.
Angular Architecture Diagram
+
Angular architecture includes:, Modules (NgModule), Components (UI + logic), Templates (HTML), Directives (behavior), Services (business logic), Dependency Injection and Routing
Angular Authentication and Authorization
+
Authentication: Verify user identity (login, JWT)., Authorization: Control access to resources/routes based on roles., Implemented using guards, tokens, and HttpInterceptors.
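A minimal route-guard sketch (AuthService and its isLoggedIn() check are assumptions for illustration; newer Angular versions also offer functional guards):
import { Injectable } from '@angular/core';
import { CanActivate, Router } from '@angular/router';

@Injectable({ providedIn: 'root' })
export class AuthService {
  isLoggedIn() { return !!localStorage.getItem('token'); } // hypothetical check
}

@Injectable({ providedIn: 'root' })
export class AuthGuard implements CanActivate {
  constructor(private auth: AuthService, private router: Router) {}
  canActivate(): boolean {
    if (this.auth.isLoggedIn()) return true;
    this.router.navigate(['/login']); // redirect unauthenticated users
    return false;
  }
}
// Usage in routes: { path: 'admin', component: AdminComponent, canActivate: [AuthGuard] }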
Angular CLI Builder?
+
Angular CLI Builder is a customizable build pipeline tool., It allows modifying build, serve, and test processes., Used to extend or replace default Angular CLI behavior.
Angular CLI?
+
Angular CLI is a command-line tool to scaffold, build, and maintain Angular applications.
Angular CLI?
+
Angular CLI is a command-line tool for Angular projects., Used to generate components, modules, services, and run builds., Simplifies scaffolding and deployment tasks.
Angular compiler?
+
Transforms Angular TypeScript and templates into JavaScript., Includes AOT and JIT compilers., Generates code for change detection and view rendering.
Angular DSL?
+
DSL (Domain-Specific Language) in Angular refers to template syntax., It allows declarative UI using HTML with Angular directives., Includes *ngIf, *ngFor, interpolation, and bindings.
Angular Elements
+
Angular Components packaged as custom HTML elements., Can be used outside Angular apps., Supports inputs, outputs, and encapsulation.
Angular expressions vs JavaScript expressions
+
Angular expressions are evaluated in the scope context and are safe., No loops, conditionals, or global access., JS expressions can access any variable or perform complex operations.
Angular finds components, directives, and pipes
+
Compiler scans NgModule declarations., Generates factories and resolves templates and dependencies.
Angular Framework?
+
Angular is a TypeScript-based front-end framework for building dynamic single-page applications (SPAs)., It provides features like components, data binding, dependency injection, and routing., Maintains a modular architecture and encourages reusable code., It supports both client-side rendering and progressive web apps.
Why was Angular introduced as a client-side framework?
+
To create dynamic SPAs with fast user interactions., Reduces server load by rendering templates on the client., Provides data binding, modularity, and reusable components.
Angular Ivy?
+
Ivy is the new rendering engine in Angular., It improves build size, speed, and runtime performance., Supports AOT compilation, better debugging, and improved type checking.
Angular Language Service?
+
Provides editor support like autocomplete, type checking, and error detection for Angular templates., Helps developers write Angular code faster and with fewer mistakes.
Angular library
+
Reusable module/package with components, directives, services., Can be published and shared via npm.
What does Angular Material mean?
+
Angular Material is a UI component library implementing Google’s Material Design., Provides pre-built components like buttons, tables, forms, and dialogs., Enhances UI consistency and responsiveness.
Angular Material?
+
A UI component library for Angular apps., Provides pre-built, responsive, and accessible components., Includes buttons, forms, tables, navigation, and themes.
Angular Material?
+
Official UI component library for Angular., Provides modern, accessible, and responsive UI components.
Can Angular render on the server side?
+
Yes, using Angular Universal., Enables SSR for SEO and faster initial load.
Angular Router?
+
Angular Router allows navigation between views/components., It maps URLs to components., Supports nested routes, lazy loading, and route guards., Enables single-page application (SPA) behavior.
Angular security model for preventing XSS attacks
+
Angular automatically escapes interpolated content., Sanitizes URLs, HTML, and styles in templates., Prevents injection attacks on the DOM.
Angular Signals with an example
+
import { signal, effect } from '@angular/core';
const count = signal(0);
count.set(5); // updates the reactive value
effect(() => console.log(count())); // re-runs whenever count changes
Signals have no subscribe method; effects and templates that read the signal update automatically when it changes.
Angular Signals?
+
Signals are reactive primitives to track state changes., They allow automatic UI updates when values change.
Angular simplifies Internationalization (i18n)
+
Provides built-in i18n support, translation files, and pipes., Supports pluralization, locale formatting, and dynamic translations., CLI helps extract and compile translations.
Angular Universal?
+
Angular Universal enables server-side rendering for SEO and faster load times.
Angular Universal?
+
Angular Universal enables server-side rendering (SSR) of Angular apps., Improves SEO and performance., Pre-renders HTML on the server before sending to client.
Angular uses client-side rendering by default
+
True. Angular renders templates in the browser using JavaScript., Server-side rendering (Angular Universal) is optional.
Angular?
+
Angular is a platform and framework for building single-page client applications using HTML and TypeScript.
Angular?
+
Angular is a TypeScript-based front-end framework., Used to build single-page applications (SPAs)., Supports components, modules, services, and reactive programming.
Annotations in Angular
+
Older term for decorators in AngularJS., Used to attach metadata to classes or functions., Helps framework know how to process the component.
AOT Compilation and advantages
+
Compiles templates during build time., Catches template errors early, reduces bundle size, improves performance.
AOT compilation? Advantages?
+
AOT (Ahead-of-Time) compiles Angular templates during build time., Advantages: Faster rendering, smaller bundle size, early error detection, and better security.
AOT compiler
+
Ahead-of-Time compiler compiles templates during build, not runtime., Reduces bundle size, improves performance, and catches template errors early.
AOT?
+
AOT compiles Angular templates during build., Generates optimized JavaScript before the app loads., Improves performance and reduces runtime errors.
Applications of HTTP interceptors
+
Add authentication tokens, logging, error handling, caching., Modify request/response globally., Handle API versioning or header manipulation.
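A minimal interceptor sketch that attaches a token to outgoing requests (the localStorage key 'token' is an assumption):
import { Injectable } from '@angular/core';
import { HttpInterceptor, HttpRequest, HttpHandler, HttpEvent } from '@angular/common/http';
import { Observable } from 'rxjs';

@Injectable()
export class AuthInterceptor implements HttpInterceptor {
  intercept(req: HttpRequest<unknown>, next: HttpHandler): Observable<HttpEvent<unknown>> {
    const token = localStorage.getItem('token');
    // Requests are immutable: clone before modifying headers
    const cloned = token ? req.clone({ setHeaders: { Authorization: `Bearer ${token}` } }) : req;
    return next.handle(cloned);
  }
}
// Register with: { provide: HTTP_INTERCEPTORS, useClass: AuthInterceptor, multi: true }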
Are all components generated in production build?
+
Only components referenced or reachable from templates and routes are included., Unused components are tree-shaken.
Are multiple interceptors supported in Angular?
+
Yes, interceptors are executed in the order provided., Each can pass control to the next using next.handle().
AsyncPipe in Angular?
+
AsyncPipe subscribes to Observables/Promises in templates and handles unsubscription automatically.
Bazel tool?
+
Bazel is a build and test tool developed by Google., It handles large-scale projects efficiently., Supports incremental builds and caching.
BehaviorSubject in Angular?
+
BehaviorSubject stores current value and emits it to new subscribers.
Benefit of Automatic Inlining of Fonts
+
Embeds fonts directly into CSS to reduce network requests., Improves page load speed and performance., Enhances First Contentful Paint (FCP) metrics.
Best practices for security in Angular
+
Use sanitization, HttpClient, and Angular templates safely., Avoid innerHTML for untrusted content., Enable Content Security Policy (CSP) and HTTPS.
Bootstrapped component?
+
Root component loaded by Angular to start the application., Declared in bootstrap array of AppModule.
Bootstrapping module?
+
The bootstrapping module initializes the Angular application., It is usually the root module (AppModule) loaded by main.ts., It sets up the root component and starts the application., It imports other modules required for app startup.
Bootstrapping module?
+
It is the root Angular module that launches the application., Defined with @NgModule and bootstrap array., Typically called AppModule.
Browser support for Angular
+
Supports latest Chrome, Firefox, Edge, Safari., IE11 support is deprecated in recent Angular versions., Modern Angular relies on evergreen browsers for features.
Browser support of Angular Elements
+
Supported in all modern browsers (Chrome, Firefox, Edge, Safari)., Polyfills may be needed for IE11.
Builder?
+
A Builder is a class or script that executes a specific task in Angular CLI., It can run builds, tests, linting, or deploy tasks., Provides flexibility to customize CLI workflows.
Building blocks of Angular?
+
Angular is built using several key components: Components (UI control), Modules (grouping functionality), Templates (HTML with Angular bindings), Services (business logic), and Dependency Injection. These work together to build scalable single-page applications.
How can you read the full response?
+
Use { observe: 'response' } with HttpClient: this.http.get('api/users', { observe: 'response' }).subscribe(resp => console.log(resp.status, resp.body)); It returns headers, status, and body.
Case types in Angular?
+
Angular uses naming conventions:, camelCase for variables and functions, PascalCase for classes and components, kebab-case for selectors and filenames, This ensures consistency and readability.
Categorize data binding types?
+
One-way binding: Interpolation, property, event, Two-way binding: [(ngModel)], Enables dynamic updates between component and view.
Chain pipes?
+
Multiple pipes can be applied sequentially using |., Example: {{ name | uppercase | slice:0:5 }}, Output is passed from one pipe to the next.
Change Detection and how does it work?
+
Change Detection tracks updates in component data and updates the view., Angular checks the component tree for changes automatically., It works via Zones and triggers re-rendering when a model changes., Helps keep UI and data synchronized.
Change detection in Angular?
+
Change detection tracks changes in application state and updates the DOM accordingly.
Change settings of zone.js
+
Configure zone.js flags before import in polyfills:, (window as any).__Zone_disable_X = true;, Controls patching of timers, events, or async operations.
Choose an element from a component template?
+
Use ViewChild or ViewChildren decorators., Example: @ViewChild('myElement') element: ElementRef;, Access DOM elements directly in component class.
Class decorators in Angular?
+
Class decorators attach metadata to a class., Common ones: @Component, @Directive, @Injectable, @NgModule., They define how the class behaves in Angular’s DI and rendering system.
Class decorators?
+
Class decorators define metadata for classes., Example: @Injectable() marks a class for dependency injection.
Class field decorators?
+
Class field decorators annotate properties of a class., Examples: @Input(), @Output(), @ViewChild()., They help Angular bind data, access DOM, or communicate between components.
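A minimal sketch showing @Input and @Output together (CounterComponent is a hypothetical example):
import { Component, Input, Output, EventEmitter } from '@angular/core';

@Component({
  selector: 'app-counter',
  template: '<button (click)="increment()">{{ count }}</button>'
})
export class CounterComponent {
  @Input() count = 0;                                  // bound by parent: [count]="5"
  @Output() countChange = new EventEmitter<number>();  // parent listens: (countChange)="..."
  increment() { this.countChange.emit(++this.count); }
}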
Classes that should not be added to declarations
+
Services, Modules, Non-Angular classes, Declarations should include components, directives, and pipes only.
Why were client-side frameworks like Angular introduced?
+
To create dynamic, responsive web apps without reloading pages., They handle data binding, DOM manipulation, and routing on the client side., Improves performance and user experience.
Code for creating a decorator.
+
A basic property decorator example:
function Log(target: any, key: string) {
  console.log(`Property ${key} was decorated`);
}
Decorators enhance or modify class behavior when the class is defined.
Codelyzer?
+
Codelyzer is a static analysis tool for Angular projects., It checks for coding style, best practices, and template errors., Used with TSLint for linting Angular apps.
Collection?
+
In Angular, a collection is a group of objects like arrays, sets, or maps., Used to store and iterate over data in templates using ngFor.
Compare service() and factory() functions.
+
service() returns an instantiated singleton object and is created using a constructor function. factory() allows returning a custom object, function, or primitive and provides more flexibility. Both are used for sharing reusable logic across components.
Compilation process?
+
Transforms Angular templates and metadata into efficient JavaScript., Ensures type safety and detects template errors., Optimizes the app for performance.
Component Decorator?
+
@Component defines a class as an Angular component., Specifies metadata like selector, template, and styles., Registers the component with Angular’s module system.
Component Test Harnesses?
+
A test API for Angular Material components., Allows interacting with components in tests without relying on DOM selectors., Provides a clean and maintainable way to write unit tests.
Components in Angular?
+
Components are building blocks of Angular applications that control a part of the UI.
Components, Modules, and Services in Angular
+
Component: UI + logic., Module: Groups components, directives, and services., Service: Provides reusable business logic, injected via dependency injection.
Components?
+
Components are building blocks of Angular apps., They contain template, class (logic), and metadata., Responsible for rendering views and handling user interaction.
Concept of Dependency Injection (DI).
+
DI provides class dependencies automatically via Angular’s injector., Reduces manual instantiation and promotes testability., Example: Injecting a service into a component constructor.
Configure injectors with providers at different levels
+
Root injector: App-wide singleton (providedIn: 'root')., Module injector: Module-specific., Component injector: Scoped to component and children.
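A minimal sketch contrasting root-level and component-level providers (LogService and PanelComponent are hypothetical names):
import { Injectable, Component } from '@angular/core';

// Root injector: one shared instance for the whole app
@Injectable({ providedIn: 'root' })
export class LogService {
  log(msg: string) { console.log(msg); }
}

// Component injector: each PanelComponent gets its own LogService instance
@Component({
  selector: 'app-panel',
  template: '...',
  providers: [LogService]
})
export class PanelComponent {
  constructor(private log: LogService) {}
}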
Content projection?
+
Mechanism to pass content from parent to child component., Allows child components to display dynamic content from parent templates.
Create a standalone component manually
+
Set standalone: true in the component decorator:
@Component({
  selector: 'app-my-component',
  standalone: true,
  templateUrl: './my-component.html'
})
export class MyComponent {}
Create a standalone component using CLI
+
Run: ng generate component my-component --standalone., Generates a component without declaring it in a module.
Create an app shell in Angular?
+
Use the Angular CLI command ng add @angular/pwa to enable PWA features. Then run ng generate app-shell (older CLI versions required --client-project <project-name>). It generates a server-side rendered shell for faster initial load. The app shell improves performance and perceived loading speed.
Create directives using CLI
+
Run: ng generate directive myDirective. Generates a directive file with the @Directive decorator ready to use.
Create displayBlock components
+
Use display: block in the component CSS (e.g. :host { display: block; }) or a block-level wrapper element. The CLI also supports ng generate component my-comp --display-block to scaffold this; otherwise Angular requires no special syntax and relies on CSS.
Create schematics for libraries?
+
Use the Angular CLI to generate a schematic, then define rules to create components or modules in the library. Automates repetitive tasks in library development.
Custom elements
+
Custom elements are browser-native HTML elements defined by developers., They encapsulate functionality and can be reused like standard tags.
Custom elements work internally
+
Angular wraps a component in custom element class., Manages inputs/outputs, change detection, and lifecycle hooks., Element behaves like a standard HTML tag.
Custom pipe?
+
Custom pipe is a user-defined pipe to transform data., Created using @Pipe decorator and implementing PipeTransform., Useful for app-specific formatting or logic.
Data binding in Angular
+
Synchronizes data between component and template., Can be one-way or two-way., Reduces manual DOM manipulation.
Data binding in Angular?
+
Data binding synchronizes data between the component class and template.
Data binding?
+
Data binding connects component class with template/view., Types include one-way (interpolation, property, event) and two-way binding., Enables dynamic UI updates.
Data Binding? In how many ways can it be executed?
+
Data binding connects data between the component and the UI. Angular supports four main types: Interpolation ({{ }}), Property Binding ([ ]), Event Binding (( )), and Two-way Binding ([( )]) using ngModel.
Deal with errors in observables?
+
Use the catchError operator in RxJS, or handle errors in the subscribe error callback. Example: observable.pipe(catchError(err => of([]))).subscribe(...)
Declarable in Angular?
+
Declarable refers to classes that can be declared in an NgModule., Includes Components, Directives, and Pipes., They define UI behavior or transformations in templates.
Decorator in Angular?
+
Decorator is a function that adds metadata to classes, e.g., @Component, @Injectable.
Decorators in Angular
+
Decorators provide metadata to classes, methods, or properties., Types: @Component, @Injectable, @Directive, @Pipe., They enable Angular features like dependency injection and templates.
Define routes?
+
Routes are defined using a Routes array:
const routes: Routes = [
  { path: 'home', component: HomeComponent },
  { path: 'about', component: AboutComponent }
];
Configured via RouterModule.forRoot(routes).
Define the ng-content Directive
+
Allows content projection into a child component., Acts as a placeholder for parent-provided HTML content.
Define typings for custom elements
+
Create a .d.ts file declaring:
interface HTMLElementTagNameMap { 'my-element': MyComponentElement; }
Ensures TypeScript type checking for the custom tag.
How is the dependency hierarchy formed?
+
Angular forms a tree hierarchy of injectors., Root injector provides global services., Child components can have component-level injectors., Services are resolved from closest injector upwards.
Dependency Injection
+
DI is a design pattern to inject dependencies into components/services., Promotes loose coupling and testability., Angular has a built-in DI system.
Dependency injection in Angular?
+
DI is a design pattern where a class receives its dependencies from an external source rather than creating them.
Dependency injection in Angular?
+
Dependency Injection (DI) provides services or objects to components automatically., Avoids manual creation of service instances., Promotes modularity and testability.
Dependency injection tree in Angular?
+
Hierarchy of injectors controlling service scope and lifetime.
Describe the MVVM architecture
+
Model-View-ViewModel separates data, UI, and logic., Angular components act as ViewModel, templates as View, services/models as Model.
Describe various dependencies in Angular application?
+
Dependencies are described using constructor injection in services or components., Decorators like @Injectable() and @Inject() define provider rules., Angular’s DI system manages the lifecycle and resolution of dependencies.
Design goals of Service Workers
+
Offline-first experience, Background sync and push notifications, Improved performance and caching strategies, Enhancing reliability and responsiveness
Detect route change in Angular?
+
Subscribe to Router events: this.router.events.subscribe(event => { /* handle NavigationEnd */ }); You can use ActivatedRoute to detect parameter changes. Useful for executing logic on route transitions.
DI token?
+
DI token is a key used to inject a dependency in Angular’s DI system., Can be a type, string, or InjectionToken., Helps Angular locate and provide the correct service or value.
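A minimal InjectionToken sketch for injecting a plain value (API_URL and ApiService are hypothetical names):
import { InjectionToken, Inject, Injectable } from '@angular/core';

// A token for a configuration value that has no class type of its own
export const API_URL = new InjectionToken<string>('api.url');

// Provide it somewhere: providers: [{ provide: API_URL, useValue: 'https://example.com/api' }]

@Injectable({ providedIn: 'root' })
export class ApiService {
  constructor(@Inject(API_URL) private apiUrl: string) {}
}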
DifBet ActivatedRoute and Router?
+
ActivatedRoute provides info about current route; Router is used to navigate programmatically.
DifBet Angular Elements and Angular Components?
+
Angular Elements are Angular components packaged as custom elements to use in non-Angular apps.
DifBet Angular Material and Bootstrap?
+
Angular Material provides Angular components with Material Design; Bootstrap is CSS framework.
DifBet Angular service and singleton service?
+
Service is reusable class; singleton ensures a single instance application-wide using providedIn: 'root'.
DifBet Angular Service Worker and Service Worker API?
+
Angular Service Worker integrates with Angular for PWA features; Service Worker API is native browser API.
DifBet AngularJS and Angular?
+
AngularJS is based on JavaScript (v1.x); Angular (v2+) is based on TypeScript and component-based architecture.
DifBet CanActivate and CanDeactivate guards?
+
CanActivate controls route access; CanDeactivate controls leaving a route.
DifBet catchError and retry operators in RxJS?
+
catchError handles errors; retry retries failed requests a specified number of times.
DifBet Content Projection and ViewChild?
+
Content Projection inserts external content into component; ViewChild accesses component's template elements.
DifBet debounceTime() and throttleTime()?
+
debounceTime waits until silence; throttleTime emits at most once in time interval.
DifBet declarations and imports in NgModule?
+
Declarations define components, directives, pipes within module; imports bring in other modules.
DifBet eagerly loaded and lazy loaded modules?
+
Eager modules load at app startup; lazy modules load on demand.
DifBet FormControl, FormGroup, and FormArray?
+
FormControl represents a single input; FormGroup groups controls; FormArray is a dynamic array of controls.
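A minimal sketch of the three building blocks (field names are illustrative):
import { FormControl, FormGroup, FormArray, Validators } from '@angular/forms';

const form = new FormGroup({
  name: new FormControl('', Validators.required),  // single input
  phones: new FormArray([new FormControl('')])     // dynamic list of inputs
});

// FormArray can grow or shrink at runtime:
(form.get('phones') as FormArray).push(new FormControl(''));
console.log(form.value); // { name: '', phones: ['', ''] }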
DifBet forwardRef and Injector in Angular?
+
forwardRef allows referencing classes before declaration; Injector provides DI manually.
DifBet HttpClientModule and HttpModule?
+
HttpModule is deprecated; HttpClientModule is modern and supports typed responses and interceptors.
DifBet map() and switchMap()?
+
map transforms values; switchMap cancels previous inner observable and switches to new observable.
DifBet NgFor and NgForOf?
+
NgFor is the structural directive; NgForOf is the underlying implementation for iterables.
DifBet ngIf else and ngSwitch?
+
ngIf else conditionally renders templates; ngSwitch selects among multiple templates.
DifBet ngOnChanges and ngDoCheck?
+
ngOnChanges is triggered by input property changes; ngDoCheck is called on every change detection cycle.
DifBet ng-template and ng-container?
+
ng-template defines reusable template; ng-container is a logical container that doesn't render in DOM.
DifBet NgZone and ChangeDetectorRef?
+
NgZone manages async operations and triggers change detection; ChangeDetectorRef manually triggers change detection.
DifBet OnPush and Default change detection strategy?
+
Default checks all components every cycle; OnPush checks only when input reference changes.
DifBet OnPush and Default change detection?
+
OnPush runs only when inputs change; Default runs on every change detection cycle.
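A minimal OnPush sketch (UserCardComponent is a hypothetical example):
import { Component, ChangeDetectionStrategy, Input } from '@angular/core';

@Component({
  selector: 'app-user-card',
  template: '{{ user.name }}',
  changeDetection: ChangeDetectionStrategy.OnPush
})
export class UserCardComponent {
  // Re-checked only when the `user` reference changes or an event fires inside this view
  @Input() user!: { name: string };
}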
DifBet Promise and Observable in Angular?
+
Promise handles single async value; Observable handles multiple values over time with operators.
DifBet providedIn: 'root' and providedIn: 'any'?
+
'root' provides singleton service globally; 'any' provides separate instances for lazy-loaded modules.
DifBet providers and imports in NgModule?
+
Providers register services with DI; imports bring in other modules.
DifBet pure and impure pipes?
+
Pure pipes are executed only when input changes; impure pipes run on every change detection cycle.
DifBet PurePipe and ImpurePipe?
+
PurePipe executes only when input changes; ImpurePipe executes every change detection.
DifBet Renderer and Renderer2?
+
Renderer2 is the updated, safer API for DOM manipulation in Angular 4+.
DifBet Renderer2 and ElementRef?
+
Renderer2 provides safe DOM manipulation; ElementRef directly accesses native element (less safe).
DifBet resolvers and guards?
+
Resolvers fetch data before route activation; guards determine access.
DifBet routerLink and href?
+
routerLink navigates without page reload using Angular router; href reloads the page.
DifBet static and dynamic components?
+
Static components are declared in template; dynamic components are created programmatically using ComponentFactoryResolver.
DifBet structural and attribute directives?
+
Structural changes DOM layout; attribute changes element behavior or style.
DifBet Subject and EventEmitter?
+
EventEmitter extends Subject and is used for @Output in components.
DifBet template-driven and reactive forms in terms of validation?
+
Template-driven uses directives and template validation; Reactive uses form controls and programmatic validation.
DifBet template-driven and reactive forms?
+
Template-driven forms are simple and rely on directives; reactive forms are more powerful, programmatically created, and use FormBuilder.
DifBet templateRef and viewContainerRef?
+
TemplateRef represents embedded template; ViewContainerRef represents container to insert views.
DifBet ViewChild and ContentChild?
+
ViewChild references elements/components in template; ContentChild references projected content.
DifBet ViewEncapsulation.None, Emulated, and ShadowDom?
+
None: no encapsulation; Emulated: scoped styles; ShadowDom: uses native shadow DOM.
DifBet window.history and Angular Router?
+
window.history manipulates browser history; Angular Router manages SPA routes without full page reload.
DiffBet Angular and AngularJS
+
AngularJS (1.x) uses JavaScript and MVC., Angular (2+) uses TypeScript, components, and modules., Angular is faster, modular, and supports Ivy compiler.
DiffBet Angular and Backbone.js
+
Angular: MVVM, components, DI, two-way binding., Backbone.js: Lightweight, MVC, manual DOM manipulation., Angular offers more structured development and tooling.
DiffBet Angular and jQuery
+
Angular: Full SPA framework, two-way binding, MVVM., jQuery: DOM manipulation library, no architecture.
DiffBet Angular expressions and JavaScript expressions
+
Angular expressions are safe and auto-sanitized., Run within Angular context and cannot use loops or exceptions.
DiffBet AngularJS and Angular?
+
AngularJS is JavaScript-based and uses MVC architecture., Angular (2+) is TypeScript-based, faster, modular, and uses components., Angular supports mobile development and modern tooling., Angular has better performance, AOT compilation, and enhanced dependency injection.
DiffBet Annotation and Decorator
+
Annotation: Metadata in older frameworks., Decorator (Angular): Adds metadata and behavior to classes, properties, or methods.
DiffBet Component and Directive
+
Component: Has template + logic, renders UI., Directive: No template, modifies DOM behavior., Component is a type of directive with a view.
DiffBet constructor and ngOnInit
+
constructor: Instantiates the class, used for dependency injection., ngOnInit: Lifecycle hook, executes after inputs are initialized., Use ngOnInit for initialization logic instead of constructor.
DiffBet interpolated content and innerHTML
+
Interpolation ({{ }}) is automatically sanitized by Angular., innerHTML can bypass sanitization if used with untrusted content., Interpolation is safer for user-generated content.
DiffBet ngIf and hidden property
+
ngIf adds/removes element from DOM., [hidden] hides element but keeps it in DOM., Use ngIf for conditional rendering and hidden for styling.
DiffBet NgModule and JavaScript module
+
NgModule defines Angular metadata (components, directives, services)., JavaScript module only exports/imports variables or classes.
DiffBet promise and observable
+
Promise: Handles single async value; executes immediately., Observable: Can emit multiple values over time; lazy execution., Observable supports operators, cancellation, and chaining.
DiffBet pure and impure pipe
+
Pure Pipe: Executes only when input changes; optimized for performance., Impure Pipe: Executes on every change detection; can handle complex scenarios., Impure pipes can cause performance overhead.
Differences between AngularJS and Angular
+
AngularJS: JS-based, uses MVC, two-way binding., Angular: TypeScript-based, component-driven, improved performance., Angular has better mobile support and modular architecture.
Differences between AngularJS and Angular for DI
+
AngularJS uses function-based injection with $inject., Angular uses class-based injection with @Injectable() decorators., Angular DI supports hierarchical injectors and tree-shakable services.
Differences between reactive and template-driven forms
+
Reactive: Model-driven, synchronous, testable., Template-driven: Template-driven, simpler, less scalable., Reactive supports dynamic controls; template-driven does not.
Differences between various versions of Angular
+
AngularJS (1.x) is JavaScript-based and uses MVC., Angular 2+ is TypeScript-based, component-driven, modular, and faster., Later versions added Ivy compiler, CLI improvements, RxJS updates, and stricter type checking., Each version focuses on performance, security, and tooling enhancements.
Different types of compilation in Angular
+
JIT (Just-in-Time): Compiles in the browser at runtime., AOT (Ahead-of-Time): Compiles at build time.
Different ways to group form controls
+
FormGroup: Groups multiple controls logically., FormArray: Groups controls dynamically as an array., Nested FormGroups for hierarchical structures.
Digest cycle in AngularJS.
+
The digest cycle is the internal process where AngularJS checks for model changes and updates the view. It compares current and previous values in watchers and continues until all bindings stabilize. It runs automatically during events handled by Angular.
Directive in Angular?
+
Directive is a class that can modify DOM behavior or structure.
Directives in Angular
+
Directives are instructions for the DOM., Types: Attribute, Structural (*ngIf, *ngFor), and Custom directives., They modify the behavior or appearance of elements.
Directives in Angular?
+
Instructions to manipulate DOM., Types: Structural (*ngIf, *ngFor) and Attribute ([ngClass], [ngStyle]).
Directives?
+
Directives are instructions in templates to manipulate DOM., Types: Structural (*ngIf, *ngFor) and Attribute ([ngClass])., They modify appearance, behavior, or layout of elements.
Do I need a Routing Module always?
+
Not strictly, but recommended for modularity., Helps separate route configuration from main app module., Improves maintainability and scalability.
Do I need to bootstrap custom elements?
+
No, Angular Elements are self-bootstrapped using createCustomElement().
Do I still need entryComponents in Angular 9?
+
No, Ivy compiler handles dynamic and bootstrapped components automatically.
How do you perform error handling?
+
Use RxJS catchError, often combined with tap for logging: this.http.get('api').pipe(catchError(err => of([]))); Allows graceful fallback or logging.
Does Angular prevent HTTP-level vulnerabilities?
+
Angular provides HttpClient with built-in CSRF/XSRF support., Prevents common HTTP attacks if configured correctly., Additional server-side measures may still be required.
Does Angular support dynamic imports?
+
Yes, using import() syntax for lazy-loaded modules., Enables code splitting and reduces initial bundle size., Works seamlessly with Angular CLI and Webpack.
DOM sanitizer?
+
Service that cleans untrusted content before rendering., Used for HTML, styles, URLs, and resource URLs., Prevents script execution in Angular apps.
Dynamic components
+
Components created programmatically at runtime., Use ComponentFactoryResolver or ViewContainerRef.createComponent(), Useful for modals, tabs, or runtime content.
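A minimal sketch using the modern ViewContainerRef.createComponent API (Angular 13+; AlertComponent and HostComponent are hypothetical; older versions went through ComponentFactoryResolver):
import { Component, ViewChild, ViewContainerRef } from '@angular/core';

@Component({ selector: 'app-alert', template: '{{ message }}' })
export class AlertComponent { message = ''; }

@Component({ selector: 'app-host', template: '<ng-container #outlet></ng-container>' })
export class HostComponent {
  @ViewChild('outlet', { read: ViewContainerRef }) outlet!: ViewContainerRef;
  open() {
    const ref = this.outlet.createComponent(AlertComponent); // created at runtime
    ref.instance.message = 'Saved successfully';             // set inputs via the instance
  }
}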
Dynamic forms
+
Forms created programmatically at runtime., Useful when form structure is not known at compile-time., Built using FormBuilder or reactive APIs.
Eager and Lazy loading?
+
Eager loading: Loads all modules at app startup., Lazy loading: Loads modules on demand, improving initial load time.
Editor support for Angular Language Service
+
Supported in VS Code, WebStorm, Sublime, and Atom., Provides autocompletion, quick info, error detection, and navigation in templates.
Enable binding expression validation?
+
Enable it via "strictTemplates": true in angularCompilerOptions., It validates property and event bindings in templates., Prevents runtime template errors and improves type safety.
Entry component?
+
Component instantiated dynamically, not referenced in template., Used in modals, dialogs, or dynamically created components.
Why is the entryComponents array not necessary every time?
+
Angular 9+ uses Ivy compiler, which automatically detects required components., No manual entryComponents needed for dynamic components.
Event binding in Angular?
+
Event binding binds events from DOM elements to component methods using (event) syntax.
What exactly is a parameterized pipe?
+
A pipe that accepts arguments to modify output., Example: {{ birthday | date:'shortDate' }} where 'shortDate' is a parameter.
What exactly is the router state?
+
Router state is the current configuration and URL state of the Angular router., Includes active routes, parameters, query parameters, and route data.
Example of built-in validators
+
name: new FormControl('', [Validators.required, Validators.minLength(3)]), Applies required and minimum length validation.
Example of few metadata errors
+
Using arrow functions in decorators., Dynamic expressions in @Input() default values., Referencing non-static properties in metadata.
Examples of NgModules
+
BrowserModule, FormsModule, HttpClientModule, RouterModule
Feature modules?
+
NgModules created for specific functionality of an app., Helps in lazy loading, code organization, and reusability.
Features included in Ivy preview
+
Tree-shakable components, Faster compilation, Improved type checking in templates, Better build size optimization
Features of Angular 7
+
CLI prompts, virtual scrolling, drag & drop., Improved performance, updated RxJS 6.3., Better accessibility and dependency updates.
Features provided by Angular Language Service
+
Autocomplete for directives, components, and inputs, Error checking in templates, Quick info on variables and types, Navigation to component and template definitions
Find Angular CLI version
+
Run command: ng version or ng v in terminal., It shows Angular CLI, framework, and Node versions.
Folding?
+
Folding is the process of resolving expressions at compile time., Helps AOT replace constants and simplify templates.
forRoot helps avoid duplicate router instances
+
forRoot() ensures singleton services in shared modules., Lazy-loaded modules can use forChild() without duplicating router.
Four phases of template translation
+
1. Extraction - extract translatable strings., 2. Translation - provide translated text., 3. Merging - merge translations with templates., 4. Rendering - compile translated templates.
Generate a class in Angular 7 using CLI
+
Command: ng generate class my-class, Creates a TypeScript class file in project structure.
Get current direction for locales
+
Use Directionality service: dir.value returns 'ltr' or 'rtl'., Useful for layout adjustments in RTL languages.
Get the current route?
+
Use Angular ActivatedRoute or Router service., Example: this.route.snapshot.url or this.router.url., It provides access to route parameters, query params, and path info.
Give an example of attribute directives
+
Attribute directives change the appearance or behavior of DOM elements., Example: <p appHighlight>Highlight this text</p>, appHighlight is a custom attribute directive., Built-in examples: ngClass, ngStyle, ngModel.
Give an example of custom pipe
+
A custom pipe transforms data in templates., Example:
@Pipe({ name: 'reverse' })
export class ReversePipe implements PipeTransform {
  transform(value: string) { return value.split('').reverse().join(''); }
}
Usage: {{ 'Angular' | reverse }} → ralugnA.
Guard in Angular?
+
Guard is a service to control access to routes, e.g., CanActivate, CanDeactivate.
What happens if a custom id is not unique?
+
Angular may overwrite translations or throw errors., Unique IDs prevent conflicts and ensure correct mapping.
What happens if I import the same module twice?
+
Angular does not create duplicate services if a module is imported multiple times., Components and directives are available where declared., Providers are instantiated only once at root level.
What happens if you do not supply a handler for the observer?
+
No callback is executed; observable executes but subscriber ignores emitted values., No error or complete handling occurs.
What happens if you use a script tag inside a template?
+
Angular does not execute script tags in templates, for security (DOM sanitization)., Scripts are ignored to prevent XSS attacks., Use external scripts, services, or component logic instead.
HTTP interceptors?
+
HTTP interceptors are used to intercept HTTP requests and responses., They can modify headers, add tokens, or handle errors globally., Registered in Angular’s dependency injection system., Useful for logging, caching, and authentication.
Http Interceptors?
+
Classes that intercept HTTP requests and responses globally., Can modify headers, log activity, or handle errors., Implemented via HTTP_INTERCEPTORS token.
HttpClient and its benefits?
+
HttpClient is Angular’s service for HTTP communication., Supports typed responses, interceptors, and observables., Simplifies REST API calls with automatic JSON parsing.
HttpInterceptor in Angular?
+
Interceptor is a service to modify HTTP requests or responses globally.
Hydration?
+
Hydration converts server-rendered HTML into a fully interactive client app., Used in Angular Universal for SSR (Server-Side Rendering).
If BrowserModule used in feature module?
+
Error occurs: BrowserModule should only be imported in AppModule., Feature modules should use CommonModule instead.
Imported modules in CLI-generated feature modules
+
CommonModule for common directives., FormsModule if forms are used., RouterModule for routing inside the feature module.
Impure Pipes
+
Impure pipes may return different output even if input is same., Executed on every change detection cycle., Useful for dynamic or async data transformations.
Include SASS into an Angular project?
+
Install node-sass or use Angular CLI:, ng config schematics.@schematics/angular:component.style scss, Rename .css files to .scss., Angular compiles SASS into CSS automatically.
Index property in ngFor directive
+
let i = index gives the current iteration index., Can be used for numbering items or conditionally styling elements.
Inject dynamic script in Angular?
+
Use Renderer2 or document.createElement('script') in a component., Set src and append it to document.body., Ensure scripts are loaded after component initialization.
Install Angular Language Service in a project?
+
Use NPM: npm install @angular/language-service --save-dev., Also, enable it in your IDE (VS Code, WebStorm) for Angular templates.
Interpolation in Angular?
+
Interpolation allows embedding expressions in HTML using {{ expression }} syntax.
Interpolation?
+
Interpolation binds component data to HTML view using {{ }}., Example: <h1>{{title}}</h1>, Displays dynamic content in templates.
Invoke a builder?
+
In Angular, a builder is invoked via angular.json or the CLI., Use commands like ng build or ng run <project>:<target>., Builders handle tasks like building, serving, or testing projects., They are customizable via options in the angular.json configuration.
Is aliasing possible for inputs and outputs?
+
Yes, using @Input('aliasName') or @Output('aliasName')., Allows different property names externally vs internally.
Is bootstrapped component required to be entry component?
+
Yes, it must be included in entryComponents in Angular versions <9., In Angular 9+ (Ivy), entryComponents array is no longer needed.
Is it mandatory to use @Injectable on every service?
+
Only required if the service has dependencies injected., Recommended for consistency and AOT compatibility.
Is it safe to use direct DOM API methods?
+
No, direct DOM manipulation may bypass Angular security., It can introduce XSS risks., Prefer Angular templates, bindings, or Renderer2.
Is static flag mandatory for ViewChild?
+
Not since Angular 9: the static flag is optional and defaults to false., Use static: true to access the element in ngOnInit, static: false to access it in ngAfterViewInit.
Router links?
+
Router links ([routerLink]) are Angular directives to navigate between routes., Example: <a routerLink="/home">Home</a>., They help determine which component should be displayed.
JIT?
+
JIT compiles Angular templates in the browser at runtime., Faster builds but slower app startup., Used mainly during development.
Key components of Angular
+
Component: UI + logic, Directive: Behavior or DOM manipulation, Module: Organizes components, Service: Shared logic/data, Pipe: Data transformation, Routing: Navigation between views
Lazy loading in Angular?
+
Lazy loading loads modules only when needed, improving performance.
Lazy loading?
+
Lazy loading loads modules only when needed., Reduces initial load time and improves performance., Configured in the routing module using loadChildren.
Lifecycle hooks available
+
Common hooks:, ngOnInit - after component initialization, ngOnChanges - on input property change, ngDoCheck - custom change detection, ngOnDestroy - cleanup before component removal
Lifecycle hooks in Angular?
+
Lifecycle hooks are methods called at specific points in a component's life, e.g., ngOnInit, ngOnDestroy.
Lifecycle hooks in Angular? Examples?
+
Lifecycle hooks allow execution of logic at specific component stages. Common hooks include:, · ngOnInit() - initialization, · ngOnChanges() - when input properties change, · ngOnDestroy() - cleanup before removal, · ngAfterViewInit() - when view loads
Lifecycle hooks of a zone
+
onStable: triggered when zone has no pending tasks., onUnstable: triggered when async tasks start., onMicrotaskEmpty: after microtasks complete.
Lifecycle hooks? Explain a few.
+
Lifecycle hooks are methods called at specific component stages., Examples:, ngOnInit: Initialization, ngOnChanges: Detect input changes, ngOnDestroy: Cleanup before destruction, They help manage component behavior.
Limitations with web workers
+
Cannot access DOM directly, Limited access to window or document objects, Cannot use Angular services directly, Communication is via messages only
List of template expression operators
+
+ - * / %, comparison (< > <= >= == !=), logical (&& || !), ternary (? :), safe navigation (?.) operators.
List pluralization categories
+
Angular supports: zero, one, two, few, many, other., Used in ICU plural expressions.
Macros?
+
Macros are predefined expressions or reusable snippets in Angular compilation., Used to simplify repeated patterns in metadata or templates.
Manually bootstrap an application
+
Use platformBrowserDynamic().bootstrapModule(AppModule) in main.ts., Starts Angular without relying on automatic bootstrapping.
Manually register locale data
+
Import the locale from @angular/common and register it:
import { registerLocaleData } from '@angular/common';
import localeFr from '@angular/common/locales/fr';
registerLocaleData(localeFr);
Mapping rules between Angular component and custom element
+
Component inputs → element attributes/properties, Component outputs → DOM events, Lifecycle hooks are preserved automatically
Metadata rewriting?
+
Metadata rewriting updates compiled metadata JSON files for AOT., Allows Angular to optimize templates and components at build time.
Metadata?
+
Metadata provides additional info about classes to Angular., Used via decorators like @Component and @NgModule., Tells Angular how to process a class.
Method decorators?
+
Decorators applied to methods to modify or enhance behavior., Example: @HostListener listens to events on host elements.
Methods of NgZone to control change detection
+
run(): execute inside Angular zone (triggers detection)., runOutsideAngular(): execute outside detection., onStable, onUnstable for subscriptions.
Module in Angular?
+
Modules group components, directives, pipes, and services into cohesive blocks of functionality.
Module?
+
Module (NgModule) organizes components, directives, and services., Every Angular app has a root module (AppModule)., Modules help in lazy loading and modular development.
Multicasting?
+
Multicasting allows sharing a single observable execution among multiple subscribers., Achieved using Subject or share() operator., Reduces unnecessary API calls or processing.
MVVM Architecture
+
Model-View-ViewModel separates UI, logic, and data., Model: Data and business logic., View: User interface., ViewModel: Mediator between view and model, handles commands and data binding., Promotes testability and clean separation of concerns.
Navigating between routes in Angular
+
Use the routerLink directive or the Router service:, <a routerLink="/home">Home</a>, Or programmatically: this.router.navigate(['/home']);
NgAfterContentInit in Angular?
+
ngAfterContentInit is called after content projected into component is initialized.
NgAfterViewInit in Angular?
+
ngAfterViewInit is called after component's view and child views are initialized.
Ngcc
+
Angular Compatibility Compiler converts node_modules packages compiled with View Engine to Ivy., Ensures libraries are compatible with Angular Ivy compiler.
Ng-content and its purpose?
+
<ng-content> is a placeholder in a component template., Used for content projection, letting parent content be rendered in child components.
NgModule in Angular?
+
NgModule is a decorator that defines a module and its metadata, like declarations, imports, providers, and bootstrap.
NgOnDestroy in Angular?
+
ngOnDestroy is called just before component destruction to clean up resources.
NgOnInit in Angular?
+
ngOnInit is called once after component initialization.
NgOnInit?
+
ngOnInit is a lifecycle hook called after Angular initializes a component., Used to perform component initialization and fetch data., Runs once per component instantiation.
NgRx?
+
NgRx is a state management library for Angular., Based on Redux pattern, uses actions, reducers, and store., Helps manage complex application state predictably.
NgUpgrade?
+
NgUpgrade allows hybrid apps running AngularJS and Angular together., Facilitates incremental migration from AngularJS to Angular., Supports components, services, and routing interoperability.
NgZone
+
NgZone is a service that manages Angular’s change detection context., It runs code inside or outside Angular zone to control updates efficiently.
Non-null type assertion operator?
+
The ! operator asserts that a value is not null or undefined., Example: value!.length tells TypeScript the variable is safe., Used to prevent compiler errors when you know the value exists.
NoopZone
+
A no-operation zone that disables automatic change detection., Useful for performance optimization in large apps.
Observable creation functions
+
of() - emits given values, from() - converts array, promise to observable, interval() - emits sequence periodically, fromEvent() - listens to DOM events
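A short runnable sketch of these creation functions (fromEvent is commented out because it needs a DOM target):

import { from, interval, of } from 'rxjs';
import { take } from 'rxjs/operators';

of(1, 2, 3).subscribe(v => console.log('of:', v));           // emits given values
from([10, 20]).subscribe(v => console.log('from:', v));      // converts an array
interval(1000).pipe(take(3)).subscribe(v => console.log('tick', v)); // periodic sequence
// fromEvent(document, 'click').subscribe(() => console.log('clicked'));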
Observable in Angular?
+
Observable represents a stream of asynchronous data that can be subscribed to.
Observable?
+
Observable is a stream of data over time., It can emit next, error, and complete notifications., Used for HTTP, events, and async tasks.
Observables different from promises?
+
Observables can emit multiple values over time, promises only one., Observables are lazy and cancellable., Promises are eager and simpler., Observables support operators for transformation and filtering.
Observables vs Promises
+
Observables: Multiple values over time, cancellable, lazy evaluation., Promises: Single value, eager, not cancellable., Observables are used with RxJS in Angular.
Observables?
+
Observables are data streams that emit values over time., They allow asynchronous operations like HTTP requests or events., Provided by RxJS in Angular.
Observer?
+
An observer is an object that listens to an observable., It has methods: next, error, and complete., Example: { next: x => console.log(x), error: e => console.log(e) }.
Operators in RxJS?
+
Operators are functions to transform, filter, or combine Observables, e.g., map, filter, mergeMap.
Optimize performance of async validators
+
Use debounceTime to reduce API calls., Use distinctUntilChanged for unique inputs., Avoid heavy computation inside validator function.
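A sketch of a debounced async validator; checkNameTaken() is a hypothetical stand-in for a real API call:

import { AbstractControl, AsyncValidatorFn, ValidationErrors } from '@angular/forms';
import { Observable, of, timer } from 'rxjs';
import { map, switchMap } from 'rxjs/operators';

// Hypothetical lookup; replace with an HttpClient call in practice.
function checkNameTaken(name: string): Observable<boolean> {
  return of(name === 'admin');
}

export function uniqueNameValidator(): AsyncValidatorFn {
  return (control: AbstractControl): Observable<ValidationErrors | null> =>
    // timer(300) acts as a debounce: Angular cancels the previous
    // validation run on each keystroke, so the lookup fires only
    // after the user pauses typing.
    timer(300).pipe(
      switchMap(() => checkNameTaken(control.value)),
      map(taken => (taken ? { nameTaken: true } : null))
    );
}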
Option to choose between inline and external template file
+
In @Component decorator:, template - inline HTML, templateUrl - external HTML file, Choice depends on component size and readability.
Purpose of ngFor directive
+
*ngFor is used to loop over a collection and render elements., Syntax: *ngFor="let item of items"., Useful for dynamic lists and tables.
Purpose of ngIf directive
+
*ngIf conditionally renders elements based on a boolean expression., Removes or adds elements from the DOM., Helps control UI dynamically.
Optional dependency
+
A dependency that may or may not be provided., Use @Optional() decorator in constructor injection.
Parameterized pipe?
+
Pipes that accept arguments to modify output., Example: {{ amount | currency:'USD':true }}, Allows flexible data formatting in templates.
Parent to Child data sharing example
+
Parent component template: <app-child [childData]="parentData"></app-child>, Child component: @Input() childData: string;, This passes parentData from parent to child; see the sketch below.
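A minimal sketch of both sides of the binding (component names are illustrative):

import { Component, Input } from '@angular/core';

@Component({
  selector: 'app-child',
  template: '<p>{{ childData }}</p>',
})
export class ChildComponent {
  @Input() childData = '';
}

@Component({
  selector: 'app-parent',
  template: '<app-child [childData]="parentData"></app-child>',
})
export class ParentComponent {
  parentData = 'Hello from parent';
}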
Pass headers for HTTP client?
+
Use HttpHeaders in Angular’s HttpClient., Example:, this.http.get(url, { headers: new HttpHeaders({'Auth':'token'}) }), Allows sending authentication, content-type, or custom headers.
Perform error handling in observables?
+
Use catchError operator inside .pipe()., Example: observable.pipe(catchError(err => of(defaultValue))), Can also use retry() to retry failed requests.
Pipe in Angular?
+
Pipe transforms data in templates, e.g., date, currency, custom pipes.
Pipes in Angular?
+
Pipes transform data before displaying in a template., Example: {{ name | uppercase }} converts text to uppercase., Can be built-in or custom.
Pipes?
+
Pipes transform data in the template without changing the component., Example: {{date | date:'short'}}, Angular has built-in pipes like DatePipe, UpperCasePipe, CurrencyPipe.
PipeTransform Interface
+
Interface that custom pipes must implement., Defines the transform() method for input-to-output transformation., Enables reusable data formatting.
Platform in Angular?
+
Platform provides runtime context for Angular applications., Examples: platformBrowser(), platformServer()., It bootstraps the Angular application on the respective environment.
Possible data update scenarios for change detection
+
Model updates via property binding, User input in forms, Async operations like HTTP requests, timers, Manual triggering using ChangeDetectorRef
Possible errors with declarations
+
Declaring a component twice in different modules, Declaring non-component classes, Missing component import in module
Precedence between pipe and ternary operators
+
The pipe operator has higher precedence than the ternary operator., a ? b : c | x is parsed as a ? b : (c | x); use parentheses, e.g. (a ? b : c) | x, to pipe the whole expression.
Prevent automatic sanitization
+
Use Angular DomSanitizer to mark content as trusted:, bypassSecurityTrustHtml, bypassSecurityTrustUrl, etc., Use carefully to avoid XSS vulnerabilities.
Prioritize TypeScript over JavaScript in Angular?
+
TypeScript provides strong typing, classes, interfaces, and compile-time checks., Improves developer productivity and maintainability.
Property binding in Angular?
+
Property binding binds component properties to HTML element properties using [property] syntax.
Property decorators?
+
Decorators that enhance class properties with Angular features., Example: @Input() for parent-to-child binding, @Output() for event emission.
Protractor?
+
Protractor is an end-to-end testing framework for Angular apps., It runs tests in real browsers and integrates with Selenium., It understands Angular-specific elements like ng-model and ng-repeat.
Provide a singleton service
+
Use @Injectable({ providedIn: 'root' })., Angular injects one instance app-wide., Do not redeclare in feature modules to avoid duplicates.
Provide build configuration for multiple locales
+
Use angular.json configurations:, "locales": { "fr": "src/locale/messages.fr.xlf" }, Build with: ng build --localize.
Provide configuration inheritance?
+
Angular modules can extend or import other modules., Child modules inherit providers, declarations, and configurations from parent modules., Helps maintain shared settings across the app.
Provider?
+
A provider tells Angular how to create a service., It defines the dependency injection configuration., Declared in modules, components, or services.
Pure Pipes
+
Pure pipes return same output for same input., Executed only when input changes., Used for performance optimization.
Purpose of <base> tag
+
Specifies the base path for relative URLs in an Angular app., Helps the router resolve paths correctly., Placed in the <head> section of index.html., Example: <base href="/">.
Purpose of animate function
+
animate() specifies duration, timing, and styles for transitions., It animates the element from one style to another., Used inside transition() to control animation flow.
Purpose of any type cast function?
+
The any type allows bypassing TypeScript type checking., It is used to temporarily cast a variable when type is unknown., Useful during migration or working with dynamic data.
Purpose of async pipe
+
async pipe automatically subscribes to Observable or Promise., It updates the template with emitted values., Handles subscription and unsubscription automatically.
Purpose of CommonModule?
+
CommonModule provides common directives like ngIf and ngFor., It is imported in feature modules to use standard Angular directives., Helps avoid reimplementing basic functionality.
Purpose of custom id
+
Assigns a unique identifier to a translatable string., Helps maintain consistent translations across builds.
Purpose of differential loading in CLI
+
Generates two bundles: modern ES2015+ for new browsers, ES5 for old browsers., Reduces payload for modern browsers., Improves performance and load time.
Purpose of FormBuilder
+
Simplifies creation of FormGroup, FormControl, and FormArray., Reduces boilerplate code for reactive forms.
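A small sketch; FormBuilder is normally injected via DI, but it takes no constructor arguments, so it is constructed directly here for brevity:

import { FormBuilder, Validators } from '@angular/forms';

const fb = new FormBuilder();

// Each array is [initialValue, validators]; fb.group() replaces the
// equivalent new FormGroup({ name: new FormControl(...) }) boilerplate.
const profileForm = fb.group({
  name: ['', [Validators.required, Validators.minLength(3)]],
  email: ['', Validators.email],
});

profileForm.patchValue({ name: 'John' }); // partial update
console.log(profileForm.valid);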
Purpose of hidden property
+
[hidden] toggles visibility of an element using CSS display: none., Unlike ngIf, it does not remove the element from the DOM.
Purpose of i18n attribute
+
Marks an element or text for translation., Angular extracts these for generating translation files.
Purpose of innerHTML
+
innerHTML sets or gets the HTML content of an element., Used for dynamic HTML rendering in the DOM.
Purpose of metadata JSON files
+
Store compiled metadata about components, directives, and modules., Used by AOT compiler for dependency injection and code generation.
Purpose of ngFor trackBy
+
trackBy improves performance by tracking items using unique identifier., Prevents unnecessary DOM re-rendering when lists change.
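A sketch of trackBy in use; with a stable id, Angular reuses existing DOM nodes when the array is replaced:

import { Component } from '@angular/core';

interface Item { id: number; label: string; }

@Component({
  selector: 'app-items',
  template: '<li *ngFor="let item of items; trackBy: trackById">{{ item.label }}</li>',
})
export class ItemsComponent {
  items: Item[] = [{ id: 1, label: 'one' }, { id: 2, label: 'two' }];

  // Called for each item; the returned id identifies the item across updates.
  trackById(index: number, item: Item): number {
    return item.id;
  }
}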
Purpose of ngSwitch directive
+
ngSwitch conditionally displays elements based on expression value., ngSwitchCase and ngSwitchDefault define cases and default view.
Purpose of Wildcard route
+
Wildcard route (**) catches all undefined routes., Typically used for 404 pages., Example: { path: '**', component: PageNotFoundComponent }.
Reactive forms
+
Form model is defined in component class using FormControl, FormGroup., Provides predictable, programmatic control and validators.
Reason for No provider for HTTP exception
+
Occurs when HttpClientModule is not imported in AppModule., Add HttpClientModule to imports to resolve dependency injection errors.
Reason to deprecate Web Tracing Framework
+
It was browser-dependent and complex., Angular adopted modern debugging tools and console-based tracing., Simplifies performance monitoring and reduces maintenance.
Reason to deprecate web worker packages
+
Native Web Worker APIs became standardized., Angular moved to simpler, built-in worker support., External packages were redundant and increased bundle size.
Recommendation for provider scope
+
Provide services in root for singleton usage., Avoid multiple registrations in lazy-loaded modules unless necessary., Use feature module providers for module-scoped instances.
ReplaySubject in Angular?
+
ReplaySubject emits a specified number of previous values to new subscribers.
Report missing translations
+
Angular logs missing translations in console during compilation., Use tools or custom loaders to handle untranslated keys.
Reset the form
+
Use form.reset() to reset values and validation state., Optionally, pass default values: form.reset({ name: 'John' }).
Restrict provider scope to a module
+
Declare the provider in the providers array of the module., Avoid providedIn: 'root' in @Injectable()., This creates a module-specific instance.
Restrictions of metadata
+
Cannot use dynamic expressions in decorators., Arrow functions or complex expressions are not allowed., Only static, serializable values are permitted.
Restrictions on declarable classes
+
Declarables cannot be services or modules., They must be declared in exactly one NgModule., Cannot be imported multiple times across modules.
Role of ngModule metadata in compilation process
+
Defines components, directives, pipes, and services., Helps compiler resolve dependencies and build module graph.
Role of template compiler for XSS prevention
+
The compiler escapes unsafe content during template rendering., Ensures dynamic content does not execute scripts., Acts as a first-line defense against XSS.
Root module in Angular?
+
The AppModule is the root module bootstrapped to launch the application.
Route Parameters?
+
Data passed through URLs to routes., Path parameters: /user/:id, Query parameters: /user?id=1, Fragment: #section1, Matrix parameters: /user;id=1
Routed entry component?
+
Component loaded via router dynamically, not referenced in template., Needs to be known to Angular compiler to generate factory.
Router events?
+
Router events are lifecycle events during navigation., Examples: NavigationStart, RoutesRecognized, NavigationEnd, NavigationError., You can subscribe to Router.events for tracking navigation.
Router imports?
+
To use routing, import:, RouterModule, Routes from @angular/router, Then configure routes using RouterModule.forRoot(routes) or forChild(routes).
Router links?
+
[routerLink] is used for navigation without page reload., Example: <a routerLink="/home">Home</a>, It generates URLs based on route configuration.
Router outlet?
+
<router-outlet> is a placeholder where routed components are displayed., The router dynamically injects the matched component here., Only one per view, or multiple (named outlets) for nested routes.
Router state?
+
Router state represents the current tree of activated routes., Contains the URL, params, queryParams, and component data., Accessible via the Router or ActivatedRoute service., Useful for inspecting the current route in the app.
RouterModule in Angular?
+
RouterModule provides services and directives for configuring routing.
Routing in Angular?
+
Routing enables navigation between different views in a single-page application.
Rule in Schematics?
+
A rule defines transformations on a project tree., It decides how files are created, modified, or deleted., Rules are building blocks of schematics.
Run Bazel directly?
+
Use Bazel CLI commands: bazel build //src:app or bazel test //src:app., It executes targets defined in BUILD files., Helps in running incremental builds independently of Angular CLI.
RxJS in Angular?
+
RxJS is a reactive programming library for handling asynchronous data streams using Observables., Used to handle async data, events, and streams., Provides operators like map, filter, and debounceTime.
RxJS Subject in Angular?
+
Subject is an observable that multicasts values to multiple observers., It can act as both an observer and observable., Used for communication between components or services.
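A minimal multicasting sketch:

import { Subject } from 'rxjs';

const updates$ = new Subject<number>();

// Two subscribers share one execution.
updates$.subscribe(v => console.log('A got', v));
updates$.subscribe(v => console.log('B got', v));

updates$.next(42); // both A and B receive 42
updates$.complete();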
RxJS?
+
RxJS (Reactive Extensions for JavaScript) is a library for reactive programming., Provides observables, operators, and subjects., Used for async tasks and event handling in Angular.
Safe navigation operator?
+
?. operator prevents null or undefined errors in templates., Example: user?.name returns undefined if user is null.
Sanitization? Does Angular support it?
+
Sanitization cleans untrusted input to prevent code injection., Angular provides built-in DomSanitizer for HTML, styles, URLs, and scripts.
Schematic?
+
Schematics are code generators for Angular projects., They automate creation of components, services, modules, or custom templates., Used with Angular CLI.
Schematics CLI?
+
Command-line tool to run, test, and create schematics., Example: schematics blank --name=my-schematic., Helps automate repetitive tasks in Angular projects.
Scope hierarchy in Angular
+
Angular components have isolated scopes with hierarchical injectors., Child components inherit parent services via DI.
Scope in Angular
+
Scope is the binding context between controller and view., Used in AngularJS; replaced by Component class properties in Angular.
Security principles in Angular
+
Follow XSS prevention, CSRF protection, input validation, and sanitization., Avoid direct DOM manipulation and unsafe URL usage., Use Angular built-in sanitizers and HttpClient.
Select an element in component template?
+
Use template reference variables or @ViewChild() decorator., Example: @ViewChild('myDiv') myDivElement: ElementRef;., This allows accessing DOM elements or child components from the component class.
Select an element within a component template?
+
Use @ViewChild() or @ViewChildren() decorators., Example: @ViewChild('myDiv') div: ElementRef;, Allows access to DOM elements or child components in TS code.
select ICU expression
+
Used for conditional translations based on variable values., Example: gender-based messages: {gender, select, male {...} female {...} other {...}}
Server-side XSS protection in Angular
+
Validate and sanitize inputs before sending to client., Use CSP headers, HTTPS, and server-side escaping., Combine with Angular client-side protections.
Service in Angular?
+
Service is a class that provides shared functionality across components.
Service Worker and its role in Angular?
+
Service Worker is a background script that intercepts network requests., It enables offline caching, push notifications, and performance improvements., Angular supports Service Worker via @angular/pwa package.
Service?
+
Service is a class that holds business logic or shared data., Injected into components using Dependency Injection., Promotes code reusability across components.
Services in Angular?
+
Reusable classes that hold business logic or shared data., Injected into components via DI., Helps separate UI and logic.
Set ngFor and ngIf on same element
+
Use <ng-container>:
<ng-container *ngIf="show"><div *ngFor="let item of items">{{item}}</div></ng-container>
Prevents structural directive conflicts: two structural directives cannot share one element.
Share data between components in Angular?
+
Parent-to-child: @Input(), Child-to-parent: @Output() with EventEmitter, Service with BehaviorSubject or Subject for unrelated components
Share services using modules?
+
Yes, but use Core module or providedIn: 'root'., Avoid providing in Shared module to prevent multiple instances.
Shared module
+
A module containing reusable components, directives, pipes, and services., Imported by other modules to reduce code duplication., Typically does not provide singleton services.
Shorthand notation for subscribe method
+
Instead of an observer object, use separate callbacks:, observable.subscribe(val => console.log(val), err => console.log(err), () => console.log('complete'));
Single Page Applications (SPA)
+
SPA loads one HTML page and dynamically updates content., Routing is handled on the client side., Improves speed and reduces server load.
Slice pipe?
+
Slice pipe extracts a subset of array or string., Example: {{ items | slice:0:3 }} shows first 3 items., Useful for pagination or previews.
Some features of Angular
+
Component-based architecture., Two-way data binding and dependency injection., Directives, services, and RxJS support., Powerful CLI for project scaffolding.
SPA? (Single Page Application)
+
A SPA loads a single HTML page and dynamically updates content using JavaScript without full page reloads. Unlike traditional websites where each action loads a new page, SPAs improve speed, user experience, and reduce server load.
Special configuration for Angular 9?
+
Angular 9 uses Ivy compiler by default., No additional configuration is needed for most apps.
Specify Angular template compiler options?
+
Template compiler options are specified in tsconfig.json or angular.json., You can enable strict type checking, full template type checking, and other options., Example: "angularCompilerOptions": { "strictTemplates": true }., It helps catch template errors at compile time.
Standalone component?
+
A component that does not require a module., Can be used independently with its own imports, providers, and declarations.
State CSS classes provided by ngModel
+
ng-valid, ng-invalid, ng-dirty, ng-pristine, ng-touched, ng-untouched, Helps style form validation states.
State function?
+
state() defines a named state for an animation., It specifies styles associated with that state., Used in combination with transition() to animate between states.
Steps to use animation module
+
1. Install @angular/animations., 2. Import BrowserAnimationsModule in the root module., 3. Use trigger, state, style, animate, and transition in components., 4. Bind animations to templates using [@triggerName].
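A compact sketch tying these steps together (assumes BrowserAnimationsModule is already imported in the root module):

import { Component } from '@angular/core';
import { animate, state, style, transition, trigger } from '@angular/animations';

@Component({
  selector: 'app-fade',
  template: `<div [@fade]="visible ? 'shown' : 'hidden'">Hello</div>`,
  animations: [
    trigger('fade', [
      state('shown', style({ opacity: 1 })),
      state('hidden', style({ opacity: 0 })),
      // Animate in both directions between the two named states.
      transition('shown <=> hidden', animate('300ms ease-in')),
    ]),
  ],
})
export class FadeComponent {
  visible = true;
}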
Steps to use declaration elements
+
1. Declare component, directive, or pipe in NgModule., 2. Export if needed for other modules., 3. Import module in consuming module., 4. Use element in template.
String interpolation and property binding?
+
String interpolation: {{ value }} inserts data into templates., Property binding: [property]="value" binds data to element properties., Both keep view and data synchronized.
String interpolation in Angular?
+
Binding data from component to template using {{ value }}., Automatically updates the DOM when the component value changes.
Style function?
+
style() defines CSS styles to apply in a particular state or keyframe., Used inside state(), transition(), or animate()., Example: style({ opacity: 0, transform: 'translateX(-100%)' }).
Subject in Angular?
+
Subject is an Observable that allows multicasting to multiple subscribers.
Subscribing?
+
Subscribing is listening to an observable., Example: .subscribe(data => console.log(data));, Triggers execution and receives emitted values.
Template expressions?
+
Template expressions are evaluated inside interpolation or binding., Can include properties, methods, operators., Cannot contain statements like loops or conditionals.
Template statements?
+
Template statements handle events like (click) or (change)., Invoke component methods in response to user actions., Example: <button (click)="save()">Save</button>
Template?
+
Template is the HTML view of a component., It defines structure, layout, and binds data using Angular syntax., Can include directives, bindings, and pipes.
Template-driven forms
+
Forms defined directly in HTML template using ngModel., Less control but simpler for small forms.
Templates in Angular
+
Templates define the HTML view of a component., They can contain Angular directives, bindings, and expressions., Templates are combined with component logic to render the UI.
Templates in Angular?
+
HTML with Angular directives, bindings, and components., Defines the view for a component.
Test Angular application using CLI?
+
Use ng test to run unit tests with Karma and Jasmine., Use ng e2e for end-to-end testing with Protractor or Cypress., CLI manages configurations and test runner setup automatically.
TestBed?
+
TestBed is Angular’s unit testing utility for configuring and initializing environment., It allows creating components, services, and modules in isolation., Used with Karma or Jasmine to run tests.
Three phases of AOT
+
1. Metadata analysis: Parse decorators and template metadata., 2. Template compilation: Convert templates to TypeScript code., 3. Code generation: Emit optimized JavaScript for the browser.
Transfer components to custom elements
+
Use createCustomElement(Component, { injector }), Register via customElements.define('tag-name', element).
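A sketch of the conversion, typically placed in AppModule's ngDoBootstrap (HelloComponent and the tag name are hypothetical):

import { Injector, NgModule } from '@angular/core';
import { createCustomElement } from '@angular/elements';
import { BrowserModule } from '@angular/platform-browser';
import { HelloComponent } from './hello.component'; // hypothetical component

@NgModule({
  imports: [BrowserModule],
  declarations: [HelloComponent],
})
export class AppModule {
  constructor(private injector: Injector) {}

  ngDoBootstrap(): void {
    // Wrap the component and register it as a native custom element.
    const element = createCustomElement(HelloComponent, { injector: this.injector });
    customElements.define('hello-element', element);
  }
}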
Transition function?
+
transition() defines how animations move between states., It specifies conditions, duration, and easing for the animation., Example: transition('open => closed', animate('300ms ease-in')).
Translate an attribute
+
Add i18n-<attr> to mark an element attribute for translation., Example: <img i18n-title title="Welcome" src="logo.png">
Translate text without creating an element
+
Use i18n attribute on existing elements or directives., Angular supports inline translations for text content.
Transpiling in Angular?
+
Transpiling converts TypeScript or modern JavaScript into plain JavaScript., This ensures compatibility with browsers., Angular uses the TypeScript compiler (tsc) for this process., It helps leverage ES6+ features safely in older browsers.
Trigger an animation
+
Use Angular Animation API: trigger, state, transition, animate., Call animation in template with [@animationName]., Can also trigger via component methods.
Two-way binding in Angular?
+
Two-way binding synchronizes data between component and template using [(ngModel)].
Two-way data binding
+
Updates component model when view changes and vice versa., Implemented using [(ngModel)]., Simplifies form handling.
Type narrowing?
+
Type narrowing is the process of refining a variable’s type., TypeScript uses control flow analysis like if, typeof, or instanceof., Example: if (typeof x === "string") { x.toUpperCase(); }
Types of data binding in Angular?
+
Interpolation, Property Binding, Event Binding, Two-way Binding ([(ngModel)]).
Types of directives in Angular?
+
Components, Structural Directives (e.g., *ngIf, *ngFor), and Attribute Directives (e.g., ngClass, ngStyle).
Types of feature modules
+
Eager-loaded modules: Loaded at app startup., Lazy-loaded modules: Loaded on demand via routing., Shared modules: Contain reusable components, directives, pipes., Core module: Provides singleton services.
Types of filters in AngularJS.
+
Filters format data displayed in the UI. Common filters include:, ✓ currency (formats currency), ✓ date (formats date), ✓ filter (filters arrays), ✓ uppercase/lowercase, ✓ orderBy (sorts collections),
Types of injector hierarchies
+
Root injector, Module-level injector, Component-level injector, Child injectors inherit from parent injector.
Types of validator functions
+
Synchronous validators (Validators.required, Validators.minLength), Asynchronous validators (HTTP-based or custom async checks)
Type-safe TestBed API changes in Angular 9
+
TestBed APIs now return strongly typed component and fixture instances., Improves type checking in unit tests.
TypeScript class with constructor and function
+
class Person {
  constructor(public name: string) {}
  greet() { console.log(`Hello ${this.name}`); }
}
let p = new Person("John");
p.greet();
TypeScript?
+
TypeScript is a superset of JavaScript that adds static typing., It compiles down to plain JavaScript for browser compatibility., Provides features like classes, interfaces, and type checking., Used extensively in Angular for better maintainability and scalability.
Update specific properties of a form model
+
Use patchValue() for partial updates., setValue() requires all properties to be updated., Example: form.patchValue({ name: 'John' }).
Upgrade Angular version?
+
Use ng update @angular/core @angular/cli., Follow migration guides for breaking changes., CLI updates dependencies, TypeScript, and configuration automatically.
Upgrade location service of AngularJS?
+
Migrate $location service to Angular’s Router module., Update code to use Router.navigate() or ActivatedRoute., Ensures smooth URL and state management in Angular.
Use any JavaScript feature in expression syntax for AOT?
+
No, only static and serializable expressions are allowed., Dynamic or runtime JavaScript features are rejected.
Use AOT compilation with Ivy?
+
Yes, Ivy fully supports AOT (Ahead-of-Time) compilation., It improves startup performance and catches template errors at compile time.
Use arrow functions in AOT?
+
No, arrow functions are not allowed in decorators or metadata., AOT requires static, serializable expressions.
Use Bazel with Angular CLI?
+
Install Bazel schematics: ng add @angular/bazel., Build or test projects using Bazel commands: ng build --bazel., It replaces default Webpack builder for performance optimization.
Use HttpClient with an example
+
Inject HttpClient in a service:, this.http.get('api/users').subscribe(data => console.log(data));, Use .get, .post, .put, .delete for REST calls., Returns observable streams.
Use interceptor for entire application
+
Provide it in AppModule providers:, providers: [{ provide: HTTP_INTERCEPTORS, useClass: MyInterceptor, multi: true }], Ensures all HTTP requests pass through it.
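A minimal interceptor sketch; the header value is illustrative:

import { Injectable } from '@angular/core';
import { HttpEvent, HttpHandler, HttpInterceptor, HttpRequest } from '@angular/common/http';
import { Observable } from 'rxjs';

@Injectable()
export class AuthInterceptor implements HttpInterceptor {
  intercept(req: HttpRequest<unknown>, next: HttpHandler): Observable<HttpEvent<unknown>> {
    // HttpRequest objects are immutable, so clone to add the header.
    const authorized = req.clone({
      setHeaders: { Authorization: 'Bearer <token>' }, // illustrative token
    });
    return next.handle(authorized);
  }
}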
Use jQuery in Angular?
+
Install jQuery via npm: npm install jquery., Import it in angular.json scripts or component: import * as $ from 'jquery';., Use carefully; prefer Angular templates over direct DOM manipulation.
Use polyfills in Angular application?
+
Modify polyfills.ts file to enable browser compatibility., Includes support for older browsers (IE, Edge)., Polyfills ensure Angular features work across different platforms.
Use SASS in Angular project?
+
Set --style=scss when creating project: ng new app --style=scss., Or change file extensions to .scss and configure angular.json., Angular CLI automatically compiles SASS to CSS.
Utility functions provided by RxJS
+
Functions like of, from, interval, timer, throwError, and fromEvent., Used to create or manipulate observables.
Various kinds of directives
+
Structural: *ngIf, *ngFor - modify DOM structure, Attribute: [ngStyle], [ngClass] - change element behavior/appearance, Custom directives: User-defined behaviors
Various security contexts in Angular
+
HTML (content in templates), Style (CSS binding), Script (JavaScript context), URL (resource links), Resource URL (external resources)
Verify model changes in forms
+
Subscribe to valueChanges or statusChanges on form or controls., Example: form.valueChanges.subscribe(val => console.log(val)).
View encapsulation in Angular?
+
Controls CSS scope in components., Types: Emulated (default), None, Shadow DOM., Prevents styles from leaking or being overridden.
ViewEncapsulation? Types?
+
ViewEncapsulation controls styling scope in Angular components., It has three modes:, · Emulated (default, scoped styles), · None (global styles), · ShadowDom (real Shadow DOM isolation)
Ways to control AOT compilation
+
Enable/disable in angular.json using "aot": true/false., Use CLI commands: ng build --aot., Manage template metadata and decorators carefully.
Ways to remove duplicate service registration
+
Provide service only in root., Avoid lazy-loaded module providers for shared services., Use forRoot pattern for modules with services.
Ways to trigger change detection in Angular
+
User events (click, input) automatically trigger detection., ChangeDetectorRef.detectChanges() manually triggers detection., NgZone.run() executes code inside Angular zone., Async operations via Observables or Promises also trigger it.
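A sketch combining runOutsideAngular with a manual detection pass, so a frequent timer does not trigger app-wide change detection on every tick:

import { ChangeDetectorRef, Component, NgZone } from '@angular/core';

@Component({
  selector: 'app-ticker',
  template: '<span>{{ ticks }}</span>',
})
export class TickerComponent {
  ticks = 0;

  constructor(zone: NgZone, private cdr: ChangeDetectorRef) {
    zone.runOutsideAngular(() => {
      setInterval(() => {
        this.ticks++;
        // Outside the zone nothing runs automatically; update this
        // component's view explicitly.
        this.cdr.detectChanges();
      }, 1000);
    });
  }
}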
Workspace APIs?
+
Workspace APIs allow managing Angular projects programmatically., Used for creating, modifying, or generating projects and configurations., Part of Angular DevKit (@angular-devkit/core).
Zone context
+
The environment that monitors async operations., Angular uses it to know when to run change detection.
Zone?
+
Zone.js is a library used by Angular to detect asynchronous operations., It helps Angular trigger change detection automatically., All async tasks like setTimeout, promises, and HTTP requests are tracked.

API First Architecture

+
Advantages of api first?
+
Improves consistency, reduces rework, enables early integration, supports microservices and multi-platform clients.
Api contract?
+
A formal definition of endpoints, request/response formats, data types, and authentication mechanisms, usually via OpenAPI/Swagger.
Api first architecture?
+
Designs the API before implementing business logic, ensuring consistency, reusability, and collaboration with front-end and third-party teams.
How does API-first support microservices?
+
APIs act as contracts between services, enabling independent development, testing, and deployment.
Api gateway?
+
A gateway handles routing, authentication, rate-limiting, and logging for microservice APIs.
Difference between API-first and code-first design?
+
API-first designs the API before coding, focusing on contracts; code-first generates APIs from the implementation, which may lack consistency.
Difference between REST and GraphQL?
+
REST exposes fixed endpoints; GraphQL allows clients to query exactly what they need. Both can follow API-first design.
Openapi (swagger)?
+
A specification for defining REST APIs, including endpoints, payloads, responses, and authentication, supporting documentation and code generation.
How to handle security in API-first design?
+
Use OAuth2, JWT, API keys, TLS/HTTPS, and input validation.
Versioning in api design?
+
Maintains backward compatibility while introducing new features, often via URL or header versioning.

Api Gateways Explained

+
What is an API Gateway?
+
The single entry point in front of backend APIs., Think of it as a reverse proxy with added features such as routing, authentication, and rate limiting.
API Gateway authentication?
+
Token-based authentication and cookie-based authentication., YARP integrates with the ASP.NET Core authN & authZ mechanism; you can specify the auth policy for each route., There are two premade policies, anonymous and default; custom policies are also supported.
Popular API Gateways?
+
Core capabilities: reverse proxying, request routing, load balancing, authN + authZ., Popular tools that can serve as API Gateways: YARP, Ocelot, Traefik.

APIs

+
Api aggregation?
+
API aggregation merges data from multiple APIs into a single response.
Api authentication vs authorization?
+
Authentication verifies identity; authorization defines access permissions.
Api authentication?
+
API authentication verifies the identity of the client accessing the API.
Api authorization?
+
API authorization determines what resources or actions an authenticated client is allowed to access.
Api backward compatibility?
+
Ensuring that changes in API do not break existing clients using older versions.
Api caching?
+
API caching stores responses temporarily to reduce load and improve performance.
Api client?
+
An API client is a program or application that sends requests to an API and processes responses.
Api contract?
+
An API contract defines the expected request/response format headers status codes and behavior.
Api cors policy?
+
CORS policy restricts cross-origin requests for security allowing only permitted domains to access the API.
Api deprecation?
+
API deprecation is the process of marking an API or feature as obsolete and guiding clients to use alternatives.
Api documentation?
+
API documentation provides instructions endpoints parameters and examples for using an API.
Api endpoint testing?
+
Endpoint testing verifies that each API endpoint functions correctly and returns expected responses.
Api gateway?
+
An API gateway is a single entry point for multiple APIs that handles routing authentication and monitoring.
Api health check?
+
API health check monitors API status to ensure it is up responsive and functioning correctly.
Api idempotency key?
+
An idempotency key prevents duplicate processing of the same request.
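A sketch of reusing one key across retries (endpoint and header name are illustrative; crypto.randomUUID needs a modern runtime):

const idempotencyKey = crypto.randomUUID();

async function createPayment(amount: number): Promise<Response> {
  return fetch('https://api.example.com/payments', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      // Same key on every retry, so the server processes the payment once.
      'Idempotency-Key': idempotencyKey,
    },
    body: JSON.stringify({ amount }),
  });
}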
Api latency?
+
API latency is the time taken for a request to travel from client to server and receive a response.
Api lifecycle?
+
API lifecycle includes design development testing deployment monitoring versioning and retirement.
Api load balancing?
+
Load balancing distributes incoming API requests across multiple servers to ensure availability and performance.
Api logging?
+
API logging records requests responses and events for debugging auditing and analytics.
Api mocking?
+
API mocking simulates API responses without the actual backend implementation for testing purposes.
Api monitoring tool?
+
Tools like Postman New Relic or Datadog track API performance uptime and errors.
Api orchestration vs aggregation?
+
Orchestration coordinates multiple API calls to complete a workflow; aggregation merges multiple API responses into one.
Api orchestration?
+
API orchestration combines multiple API calls into a single workflow to complete complex tasks.
Api proxy?
+
An API proxy is an intermediary that forwards API requests to backend services often used for security and routing.
Api rate limiting strategy?
+
Rate limiting strategies include token bucket fixed window sliding window and leaky bucket algorithms.
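A minimal token-bucket sketch: the bucket holds up to capacity tokens, refills continuously, and each allowed request consumes one token:

class TokenBucket {
  private tokens: number;
  private lastRefill = Date.now();

  constructor(private capacity: number, private refillPerSec: number) {
    this.tokens = capacity;
  }

  allow(): boolean {
    const now = Date.now();
    const elapsedSec = (now - this.lastRefill) / 1000;
    // Refill proportionally to elapsed time, capped at capacity.
    this.tokens = Math.min(this.capacity, this.tokens + elapsedSec * this.refillPerSec);
    this.lastRefill = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}

const limiter = new TokenBucket(10, 5); // burst of 10, 5 req/s sustained
console.log(limiter.allow());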
Api rate limiting window?
+
Rate limiting window defines the time interval in which the maximum requests are counted.
Api response time?
+
API response time is the duration between request submission and response reception.
Api sandbox?
+
API sandbox is a testing environment that simulates API behavior without affecting production.
Api security?
+
API security protects APIs from unauthorized access attacks and misuse.
Api server?
+
An API server handles incoming requests from clients processes them and returns responses.
Api testing?
+
API testing verifies that APIs work as expected including functionality performance and security.
Api throttling in cloud?
+
In cloud API throttling prevents excessive requests to ensure fair usage and system stability.
Api throttling limit?
+
Throttling limit defines the maximum allowed requests per time window.
Api throttling pattern?
+
The throttling pattern limits excessive API calls to prevent system overload.
Api throttling vs caching?
+
Throttling limits request rate; caching stores frequent responses to improve performance.
Api throttling vs quota?
+
Throttling limits request rate; quota defines maximum allowed usage over a longer period.
Api throttling vs rate limiting?
+
Throttling controls the number of requests over time; rate limiting restricts requests per client or IP.
Api tokens?
+
API tokens are credentials used to authenticate and authorize API requests.
Api versioning best practice?
+
Best practice: include version in URL (e.g. /v1/resource) or header to maintain backward compatibility.
Api versioning?
+
API versioning allows maintaining multiple versions of an API to ensure backward compatibility.
Api?
+
An API (Application Programming Interface) is a set of rules that allows software applications to communicate with each other.
Cors?
+
CORS (Cross-Origin Resource Sharing) is a security feature that allows or restricts resource requests from different domains.
Difference between REST and SOAP?
+
REST is lightweight stateless and uses HTTP; SOAP is protocol-based heavier and uses XML messages.
Difference between synchronous and asynchronous APIs?
+
Synchronous APIs wait for a response immediately; asynchronous APIs return immediately and process in the background.
Endpoint in apis?
+
An endpoint is a specific URL where an API can access resources or perform operations.
Explain api client sdk.
+
API client SDK is a prebuilt library that helps developers interact with an API using language-specific methods.
Explain api gateway vs reverse proxy.
+
API gateway manages routing security and monitoring for APIs; reverse proxy forwards client requests to servers.
Explain api idempotency vs retry.
+
Idempotency ensures repeated requests have no extra effect; retry may resend requests safely using idempotency keys.
Explain api key authentication.
+
API key authentication uses a unique key provided to clients to access the API.
Explain api load testing.
+
API load testing evaluates performance under heavy usage to identify bottlenecks and ensure scalability.
Explain api mocking vs stubbing.
+
Mocking simulates API behavior for testing; stubbing provides fixed responses for predefined inputs.
Explain api monitoring.
+
API monitoring tracks availability performance errors and usage patterns to ensure reliability.
Explain api pagination.
+
Pagination splits large API responses into smaller manageable chunks for efficient data transfer.
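A sketch of consuming page-based pagination; the parameter names (page, pageSize) vary by API and are illustrative:

async function fetchAllUsers(baseUrl: string): Promise<unknown[]> {
  const pageSize = 50;
  const all: unknown[] = [];
  for (let page = 1; ; page++) {
    const res = await fetch(`${baseUrl}/users?page=${page}&pageSize=${pageSize}`);
    const batch: unknown[] = await res.json();
    all.push(...batch);
    if (batch.length < pageSize) break; // a short page means we reached the end
  }
  return all;
}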
Explain api request headers.
+
Request headers carry metadata like authentication tokens content type and caching instructions.
Explain api response codes 2xx 4xx 5xx.
+
2xx = success, 4xx = client error, 5xx = server error.
Explain api security best practices.
+
Use authentication authorization HTTPS input validation rate limiting and logging to secure APIs.
Explain api testing types.
+
Types include functional performance security integration and contract testing.
Explain api throttling algorithm.
+
Algorithms include fixed window sliding window token bucket and leaky bucket to control request rates.
Explain api versioning strategies.
+
Strategies: URI versioning (/v1/resource) request header versioning query parameter versioning (?version=1).
Explain endpoint security.
+
Endpoint security ensures that each API endpoint is protected using authentication authorization and encryption.
Explain oauth scopes.
+
OAuth scopes define the permissions and access level granted to a client application.
Explain oauth.
+
OAuth is an authorization framework that allows third-party applications limited access to user resources without exposing credentials.
Explain rate limit headers.
+
Rate limit headers indicate remaining requests and reset time to clients for API usage management.
Explain rate-limiting vs throttling.
+
Rate-limiting controls API usage over time; throttling limits request rate per user or session.
Explain response codes in rest.
+
Common HTTP response codes include 200 (OK) 201 (Created) 400 (Bad Request) 401 (Unauthorized) 404 (Not Found) 500 (Server Error).
Explain rest api vs graphql.
+
REST uses multiple endpoints for resources; GraphQL uses a single endpoint allowing flexible queries.
Explain rest api vs rpc.
+
REST API is resource-based with standard HTTP methods; RPC (Remote Procedure Call) executes functions/methods on a remote server.
Explain rest constraints.
+
REST constraints include client-server statelessness cacheability layered system code-on-demand (optional) and uniform interface.
Explain restful status codes.
+
Status codes indicate API response results: 200 (OK) 201 (Created) 400 (Bad Request) 401 (Unauthorized) 404 (Not Found) 500 (Server Error).
Explain the difference between PUT and PATCH.
+
PUT updates a resource entirely; PATCH updates only specified fields.
Graphql?
+
GraphQL is a query language for APIs that allows clients to request exactly the data they need.
Hateoas?
+
HATEOAS (Hypermedia as the Engine of Application State) is a REST principle where responses include links to related actions.
Hmac authentication?
+
HMAC authentication uses a hash-based message authentication code to verify request integrity and authenticity.
Http methods used in rest?
+
Common HTTP methods are GET POST PUT DELETE PATCH and OPTIONS.
Idempotency in apis?
+
Idempotency ensures that multiple identical requests produce the same result without side effects.
Idempotent api method?
+
An idempotent method (GET PUT DELETE) produces the same result even if called multiple times.
Jwt?
+
JWT (JSON Web Token) is a compact self-contained token used for securely transmitting information between parties.
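A sketch of inspecting (not verifying!) a JWT's claims; the payload is the middle of the three base64url-encoded segments:

function decodeJwtPayload(token: string): Record<string, unknown> {
  const payload = token.split('.')[1];
  // base64url -> base64, with padding restored for atob.
  const b64 = payload.replace(/-/g, '+').replace(/_/g, '/');
  const padded = b64.padEnd(b64.length + ((4 - (b64.length % 4)) % 4), '=');
  return JSON.parse(atob(padded));
}
// Signature verification must still happen server-side with the secret/key.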
Oauth 2.0?
+
OAuth 2.0 is an authorization framework allowing applications limited access to user resources.
Oauth refresh token?
+
A refresh token is used to obtain a new access token without re-authentication.
Openid connect?
+
OpenID Connect is an authentication layer on top of OAuth 2.0 for verifying user identity.
Polling?
+
Polling repeatedly checks an API at intervals to get updates.
Rate limiting?
+
Rate limiting restricts the number of API requests a client can make in a given time period to prevent abuse.
Rest api documentation?
+
REST API documentation explains endpoints methods parameters responses and examples for developers.
Rest client?
+
A REST client sends HTTP requests to REST APIs and processes responses.
Rest server?
+
A REST server handles HTTP requests from clients processes them and sends responses.
Rest?
+
REST (Representational State Transfer) is an architectural style that uses HTTP methods and stateless communication.
Restful api resource?
+
A RESTful resource is an identifiable object that can be accessed and manipulated via HTTP methods.
Restful resource?
+
A RESTful resource is an object or entity that can be accessed and manipulated using HTTP methods.
Soap action?
+
SOAP action specifies the intent of a SOAP HTTP request for proper routing and execution.
Soap envelope?
+
SOAP envelope wraps the XML message to define structure header and body for SOAP communication.
Soap fault?
+
SOAP fault is an error message returned by a SOAP API to indicate processing issues.
Soap vs rest?
+
SOAP is protocol-based and formal with XML; REST is architectural stateless and uses lightweight formats like JSON.
Soap?
+
SOAP (Simple Object Access Protocol) is a protocol for exchanging structured XML-based messages over a network.
Statelessness in rest?
+
Statelessness means each request from a client to server contains all necessary information without relying on server memory.
Swagger/openapi?
+
Swagger/OpenAPI is a standard framework for documenting and testing RESTful APIs.
Throttling in apis?
+
Throttling limits API usage to control traffic and prevent server overload.
Which tools are used for API testing?
+
Common tools include Postman SoapUI JMeter and RestAssured.
Types of apis?
+
Common types are REST SOAP GraphQL WebSocket and RPC APIs.
Versioning in rest apis?
+
Versioning ensures backward compatibility when APIs evolve using URLs headers or query parameters.
Webhook?
+
A webhook is an HTTP callback that notifies a client when an event occurs on the server.
Xml vs json in apis?
+
XML is verbose and strict; JSON is lightweight human-readable and widely used in REST APIs.

Architecture

+
Advantages of microservices?
+
Microservices offer scalability flexibility independent deployment fault isolation and easier maintenance.
Api gateway in microservices?
+
API Gateway is a single entry point for microservices handling routing authentication and monitoring.
Api?
+
An API (Application Programming Interface) allows software systems to communicate using defined interfaces.
Api-first design?
+
APIs are designed before implementation to ensure consistency, reusability, and integration readiness.
Architecture patterns?
+
Patterns like MVC, Microservices, Layered, and Event-Driven provide reusable solutions for common design problems and enforce consistency.
Base?
+
BASE is an alternative to ACID for distributed systems: Basically Available, Soft state, Eventually consistent.
Blue-green deployment?
+
Blue-green deployment uses two identical environments to switch traffic safely during releases.
Builder pattern?
+
Builder pattern separates the construction of a complex object from its representation.
Caching?
+
Caching stores frequently used data temporarily for faster access.
Cap theorem trade-off?
+
In distributed systems you can guarantee only two of Consistency, Availability, and Partition tolerance simultaneously.
Cap theorem?
+
CAP theorem states that a distributed system can provide only two of three: consistency, availability, and partition tolerance.
Cdn?
+
A CDN (Content Delivery Network) delivers content via geographically distributed servers to improve performance.
Circuit breaker?
+
Circuit breaker prevents cascading failures in distributed systems by halting requests to failing services.
Client-server architecture?
+
Client-server architecture separates clients (users) and servers (service providers) communicating over a network.
Cloud-native architecture?
+
Designing applications to leverage cloud features like elasticity, microservices, containers, and managed services.
Component-based architecture?
+
It divides a system into modular, reusable components with defined interfaces, simplifying maintenance and scalability.
Container?
+
A container packages an application and its dependencies to run consistently across environments.
Containerization in architecture?
+
Using containers (like Docker) to package apps with dependencies for consistent deployment and scaling.
Cqrs (command query responsibility segregation)?
+
CQRS separates read and write operations for better scalability and performance, and is commonly used with event sourcing.
Data lake?
+
A data lake stores structured and unstructured data at scale for analytics.
Data warehouse?
+
A data warehouse stores structured processed data optimized for reporting and analysis.
Database shard?
+
Database sharding splits data across multiple databases for scalability.
Denormalization?
+
Denormalization adds redundancy for improved read performance at the cost of storage and complexity.
Design for security in architecture?
+
Incorporates authentication, authorization, encryption, and secure coding practices from the start.
Design pattern in architecture?
+
A design pattern is a repeatable solution to a common software problem within a specific context.
Design patterns?
+
Design patterns are reusable solutions to common software design problems.
Difference between architecture and design?
+
Architecture defines system structure and principles; design focuses on implementation details within that structure.
Difference between monolithic and microservices architecture?
+
Monolithic combines all features in one codebase; microservices decouple services for independent deployment and scaling.
Difference between stateless and stateful services?
+
Stateless services do not retain client information between requests; stateful services maintain client state.
Difference between synchronous and asynchronous communication?
+
Synchronous waits for a response; asynchronous allows independent execution, improving scalability and responsiveness.
Disadvantages of microservices?
+
Challenges include increased complexity, distributed system management, network latency, and testing difficulty.
Distributed system?
+
A distributed system consists of multiple independent computers working together as a single system.
Docker?
+
Docker is a platform to build ship and run applications in containers.
Domain-driven design (ddd)?
+
DDD aligns software design with complex business domains, emphasizing entities, aggregates, and bounded contexts.
Event sourcing?
+
Event sourcing stores state changes as an append-only sequence of events rather than the current snapshot, enabling auditability and replay.
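A toy Python sketch of the idea, with an illustrative Account aggregate (names and event shapes are hypothetical):

    class Account:
        """State is derived by replaying an append-only log of events."""
        def __init__(self):
            self.balance = 0
            self.events = []

        def apply(self, event):
            kind, amount = event
            self.balance += amount if kind == "deposited" else -amount

        def record(self, event):
            self.events.append(event)  # the event log is the source of truth
            self.apply(event)

    account = Account()
    account.record(("deposited", 100))
    account.record(("withdrawn", 30))

    # Replaying the log reconstructs the same state (useful for audit/recovery).
    replica = Account()
    for e in account.events:
        replica.apply(e)
    assert replica.balance == account.balance == 70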
Event-driven architecture?
+
Architecture where components communicate by producing and consuming events, improving decoupling and scalability.
Eventual consistency?
+
Eventual consistency ensures that over time all nodes in a distributed system converge to the same state.
Explain acid properties.
+
ACID ensures database reliability: Atomicity, Consistency, Isolation, Durability.
Explain adapter pattern.
+
Adapter pattern allows incompatible interfaces to work together by converting one interface to another.
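A small Python sketch (class names are illustrative): the adapter converts a legacy print_text() interface into the render() interface the client expects.

    class LegacyPrinter:
        def print_text(self, text: str) -> None:
            print(text)

    class PrinterAdapter:
        """Wraps the legacy object and exposes the interface clients expect."""
        def __init__(self, legacy: LegacyPrinter):
            self._legacy = legacy

        def render(self, text: str) -> None:
            self._legacy.print_text(text)  # translate the call

    PrinterAdapter(LegacyPrinter()).render("hello")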
Explain api throttling.
+
API throttling limits the number of requests a client can make to prevent overload.
Explain bounded context in ddd.
+
A bounded context defines a boundary within which a particular domain model applies.
Explain canary deployment.
+
Canary deployment releases a new version to a small subset of users to monitor impact before full rollout.
Explain cap theorem.
+
CAP theorem states that a distributed system can only guarantee two of: Consistency, Availability, Partition tolerance.
Explain cdn caching.
+
CDN caching stores content at edge servers near users for faster delivery.
Explain circuit breaker pattern.
+
Circuit breaker prevents repeated failures in distributed systems by stopping requests to failing services temporarily.
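A minimal sketch in Python, assuming a threshold of consecutive failures and a cool-down period (both parameters are illustrative):

    import time

    class CircuitBreaker:
        """Fail fast after `max_failures` consecutive errors, for `reset_after` seconds."""
        def __init__(self, max_failures: int = 3, reset_after: float = 30.0):
            self.max_failures = max_failures
            self.reset_after = reset_after
            self.failures = 0
            self.opened_at = None

        def call(self, func, *args, **kwargs):
            if self.opened_at is not None:
                if time.monotonic() - self.opened_at < self.reset_after:
                    raise RuntimeError("circuit open: failing fast")
                self.opened_at = None  # half-open: let one trial call through
            try:
                result = func(*args, **kwargs)
            except Exception:
                self.failures += 1
                if self.failures >= self.max_failures:
                    self.opened_at = time.monotonic()  # open the circuit
                raise
            self.failures = 0  # a success closes the circuit again
            return result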
Explain database normalization.
+
Normalization organizes database tables to reduce redundancy and improve data integrity.
Explain decorator pattern.
+
Decorator pattern adds behavior to objects dynamically without modifying their structure.
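A small Python sketch of the object-oriented (GoF) decorator, distinct from Python's @decorator syntax; the coffee example is illustrative:

    class Coffee:
        def cost(self) -> float:
            return 2.0

    class MilkDecorator:
        """Wraps another component and adds behavior without changing it."""
        def __init__(self, inner):
            self.inner = inner

        def cost(self) -> float:
            return self.inner.cost() + 0.5

    order = MilkDecorator(MilkDecorator(Coffee()))
    print(order.cost())  # 3.0, two layers of added behavior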
Explain dependency injection.
+
Dependency injection provides components with their dependencies from external sources instead of creating them internally.
Explain eager loading.
+
Eager loading retrieves all related data upfront to avoid multiple queries.
Explain etl.
+
ETL (Extract Transform Load) is a process of moving and transforming data from source systems to a data warehouse.
Explain event-driven architecture.
+
Event-driven architecture uses events to trigger and communicate between decoupled services or components.
Explain eventual consistency vs strong consistency.
+
Eventual consistency allows temporary discrepancies converging later; strong consistency ensures immediate consistency across nodes.
Explain eventual consistency.
+
Eventual consistency allows data replicas to converge over time without guaranteeing immediate consistency.
Explain idempotency.
+
Idempotency ensures that multiple identical requests produce the same result without side effects.
Explain layered vs hexagonal architecture.
+
Layered architecture has rigid layers; hexagonal promotes a testable, decoupled core of business logic.
Explain message queue.
+
A message queue allows asynchronous communication between components using messages.
Explain modular monolith.
+
A modular monolith organizes a single application into independent modules to gain maintainability without full microservices complexity.
Explain mvc architecture.
+
MVC (Model-View-Controller) separates application logic: the Model handles data, the View handles UI, and the Controller handles input.
Explain mvc vs mvvm.
+
MVC separates Model, View, and Controller; MVVM binds a ViewModel to the View using data binding, reducing controller logic.
Explain oauth.
+
OAuth is an authorization protocol allowing third-party applications to access user data without sharing credentials.
Explain polling vs webhooks.
+
Polling repeatedly checks for updates; webhooks notify automatically when an event occurs.
Explain retry pattern.
+
Retry pattern resends failed requests with delays to handle transient failures.
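A minimal sketch in Python with exponential backoff; attempt counts and delays are illustrative, and the wrapped callable is a placeholder:

    import time

    def retry(func, attempts: int = 3, base_delay: float = 0.5):
        """Call `func`, retrying with exponential backoff on transient errors."""
        for attempt in range(attempts):
            try:
                return func()
            except Exception:
                if attempt == attempts - 1:
                    raise  # out of attempts: surface the error
                time.sleep(base_delay * (2 ** attempt))  # 0.5s, 1s, 2s, ...

    # retry(lambda: call_flaky_service())  # call_flaky_service is hypothetical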
Explain rolling deployment.
+
Rolling deployment gradually replaces old instances with new versions without downtime.
Explain rolling vs blue-green deployment.
+
Rolling deployment updates instances gradually; blue-green deployment switches traffic between two identical environments.
Explain serverless architecture.
+
Serverless architecture runs code without managing servers; the cloud provider handles infrastructure automatically.
Explain service discovery.
+
Service discovery automatically detects services and their endpoints in dynamic environments.
Explain singleton pattern.
+
Singleton pattern ensures a class has only one instance and provides a global access point.
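The classic Python implementation overrides __new__ so every construction returns the same instance:

    class Singleton:
        _instance = None

        def __new__(cls):
            if cls._instance is None:
                cls._instance = super().__new__(cls)  # create once
            return cls._instance

    assert Singleton() is Singleton()  # both calls yield one shared instance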
Explain soap service.
+
SOAP service uses XML-based messages and strict protocols for communication.
Explain sticky sessions.
+
Sticky sessions bind a client to a specific server instance to maintain state across multiple requests.
Explain sticky vs stateless sessions.
+
Sticky sessions bind users to a server; stateless sessions allow requests to be handled by any server.
Explain strategy pattern.
+
Strategy pattern defines a family of algorithms, encapsulates each one, and makes them interchangeable.
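In Python the strategies can simply be callables injected into a context object; the sorting example below is illustrative (both stand-ins delegate to sorted()):

    def bubble_sort(data):
        return sorted(data)  # stand-in implementation

    def quick_sort(data):
        return sorted(data)  # stand-in implementation

    class Sorter:
        """The algorithm (strategy) is injected and interchangeable at runtime."""
        def __init__(self, strategy):
            self.strategy = strategy

        def sort(self, data):
            return self.strategy(data)

    print(Sorter(quick_sort).sort([3, 1, 2]))  # [1, 2, 3]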
Explain synchronous vs asynchronous apis.
+
Synchronous APIs wait for a response; asynchronous APIs allow processing in the background without waiting.
Explain the difference between layered and microservices architectures.
+
Layered architecture is monolithic with multiple layers; microservices split functionality into independently deployable services.
Explain the difference between soa and microservices.
+
SOA is an enterprise-level architecture with larger services; microservices break services into smaller independently deployable units.
Explain the difference between synchronous and asynchronous communication.
+
Synchronous communication waits for a response immediately; asynchronous communication does not.
Explain the repository pattern.
+
The repository pattern abstracts data access logic, providing a clean interface to query and manipulate data.
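A minimal in-memory sketch in Python; the class and method names are illustrative, and a SQL-backed repository could expose the same interface:

    class InMemoryUserRepository:
        """Hides storage details behind a small query/persist interface."""
        def __init__(self):
            self._users = {}

        def add(self, user_id: int, name: str) -> None:
            self._users[user_id] = {"id": user_id, "name": name}

        def get(self, user_id: int):
            return self._users.get(user_id)

    repo = InMemoryUserRepository()
    repo.add(1, "Ada")
    print(repo.get(1))  # {'id': 1, 'name': 'Ada'}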
Explain vertical vs horizontal scaling.
+
Vertical scaling adds resources to a single machine; horizontal scaling adds more machines.
Façade pattern?
+
Façade pattern provides a simplified interface to a complex subsystem.
Fault tolerance?
+
Fault-tolerant systems continue functioning correctly even when components fail, minimizing downtime and data loss.
Graphql?
+
GraphQL is a query language for APIs allowing clients to request exactly the data they need.
Hexagonal architecture?
+
Hexagonal architecture (Ports & Adapters) isolates core logic from external systems through adapters.
High availability?
+
High availability ensures a system remains operational and accessible despite failures, often using redundancy and failover.
Kafka?
+
Kafka is a distributed streaming platform for building real-time data pipelines and applications.
Kubernetes?
+
Kubernetes is an orchestration platform to deploy scale and manage containerized applications.
Layered architecture?
+
Layered architecture organizes code into layers such as presentation, business, and data access; this separation of concerns makes systems easier to develop, maintain, and test.
Lazy loading?
+
Lazy loading delays loading of resources until they are needed.
Load balancer?
+
A load balancer distributes network or application traffic across multiple servers to optimize resource use, performance, and availability.
Load balancing?
+
Load balancing distributes incoming traffic across multiple servers to improve performance and reliability.
Maintainability in architecture?
+
Maintainability is ease of making changes, fixing bugs, or adding features without affecting other parts of the system.
Message broker?
+
A message broker facilitates communication between services by routing and transforming messages.
Microkernel architecture?
+
Microkernel architecture provides a minimal core system with plug-in modules for extended functionality.
Microservices anti-pattern?
+
Microservices anti-patterns include tight coupling, shared databases, and improper service boundaries.
Microservices architecture?
+
Microservices architecture splits an application into small, independently deployable services that communicate via APIs, enhancing flexibility and scalability.
Monolith vs microservices?
+
Monolith is a single deployable application; microservices break functionality into independently deployable services.
Monolithic architecture?
+
Monolithic architecture is a single unified application where all components are tightly coupled.
Non-functional requirements (nfrs)?
+
NFRs define system qualities like performance, scalability, reliability, and security rather than features.
Observer pattern?
+
Observer pattern allows objects to subscribe and get notified when another object changes state.
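A minimal Python sketch where observers are plain callbacks (names are illustrative):

    class Subject:
        """Observers subscribe; the subject notifies them on state changes."""
        def __init__(self):
            self._observers = []

        def subscribe(self, callback):
            self._observers.append(callback)

        def set_state(self, state):
            for notify in self._observers:
                notify(state)  # push the change to every subscriber

    subject = Subject()
    subject.subscribe(lambda s: print("observer saw:", s))
    subject.set_state("ready")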
Openid connect?
+
OpenID Connect is an authentication layer on top of OAuth 2.0 to verify user identity.
Orchestration in microservices?
+
Automated management of containers or services using tools like Kubernetes for scaling, networking, and fault tolerance.
Performance optimization?
+
Designing systems for low latency, efficient resource usage, and fast response times under load.
Proxy pattern?
+
Proxy pattern provides a placeholder or surrogate to control access to another object.
Proxy server?
+
A proxy server acts as an intermediary between a client and a server, providing request forwarding, caching, and security.
Rabbitmq?
+
RabbitMQ is a message broker that uses queues to enable asynchronous communication between services.
Reference architecture?
+
A reference architecture is a standardized template or blueprint for building systems within a domain, promoting best practices.
Rest vs soap?
+
REST is lightweight, stateless, and uses HTTP; SOAP is protocol-based, heavier, and supports strict contracts.
Restful architecture?
+
RESTful architecture uses stateless HTTP requests to manipulate resources following REST principles.
Restful service?
+
A RESTful service follows REST principles using standard HTTP methods for communication.
Reverse proxy?
+
A reverse proxy receives client requests on behalf of backend servers and forwards them, often for load balancing or security.
Role of architecture documentation?
+
Communicates system structure, decisions, and rationale to stakeholders, enabling clarity and informed decision-making.
Role of architecture in devops?
+
Ensures system design supports CI/CD pipelines, automated testing, monitoring, and fast deployment cycles.
Scalability in architecture?
+
Scalability is a system’s ability to handle growing workloads by adding resources vertically or horizontally.
Service mesh?
+
A service mesh manages communication between microservices, providing features like routing, security, and observability.
Service registry?
+
A service registry keeps track of all available services and their endpoints for dynamic discovery in microservices.
Service-oriented architecture (soa)?
+
SOA organizes software as interoperable services with standard communication protocols, promoting reuse across systems.
Sharding vs partitioning?
+
Sharding splits data horizontally across databases; partitioning divides tables within a database for management and performance.
Software architecture?
+
Software architecture defines the high-level structure of a system, its components, and their interactions, ensuring scalability, maintainability, and alignment with business goals.
Solid principles?
+
SOLID principles guide object-oriented design: Single responsibility, Open/closed, Liskov substitution, Interface segregation, Dependency inversion.
Solution architecture vs enterprise architecture?
+
Solution architecture focuses on a specific project or system; enterprise architecture aligns all IT systems with business strategy.
Strangler pattern?
+
Strangler pattern gradually replaces legacy systems with new services over time.
Technical debt?
+
Accumulated shortcuts in design or code that require future rework, impacting maintainability and quality.
Token-based authentication?
+
Token-based authentication uses tokens to authenticate users without storing session state on the server.
Trade-off in architecture?
+
Balancing conflicting requirements like performance vs cost or flexibility vs simplicity to make informed design decisions.

Authorisation & Cloud Security

+
Why must redirect uris be exact?
+
To prevent open redirect vulnerabilities.
'aud' claim?
+
Audience: the application the token is meant for.
Access review?
+
Feature to periodically validate user access.
Access token lifetime?
+
The time before the access token expires, typically 60–90 minutes depending on policy.
Access token manager?
+
Component controlling token storage/expiry.
Access token?
+
A credential used to access protected resources and APIs.
'acr' claim?
+
Authentication Context Class Reference — indicates authentication strength.
Acs url?
+
Assertion Consumer Service URL: the SP endpoint that receives SAML assertions/responses.
Active-active vs active-passive ha?
+
Active-Active: all nodes serve traffic simultaneously. Active-Passive: one node is primary and another is standby for failover.
Adaptive authentication?
+
Dynamic authentication based on risk.
Adaptive sso?
+
Applies dynamic authentication conditions.
'address' scope?
+
Access to user address attributes.
Adfs application group?
+
Collection of OAuth/OIDC clients.
Adfs farm?
+
Cluster of servers providing redundancy.
Adfs federation metadata?
+
XML describing ADFS endpoints and certificates.
Adfs proxy?
+
Enables external access to internal ADFS.
Adfs web application proxy?
+
Proxy enabling external access to ADFS.
Adfs?
+
Active Directory Federation Services implementing SAML.
Adfs?
+
Active Directory Federation Services: on-prem identity provider.
Advantages of oauth 2.0?
+
Supports SSO, secure token-based access, scoped permissions, mobile/server support, and third-party integrations.
Which algorithms does oidc use?
+
RS256, ES256, HS256.
Always sign assertions?
+
Yes, signing is mandatory for security.
'amr' claim?
+
Authentication Methods Reference — methods used for authentication.
Api security in cloud?
+
API security protects cloud APIs from misuse, attacks, and unauthorized access.
App registration?
+
Configuration representing an application identity.
App role assignment?
+
Assign roles to users or groups for an app.
Which apps must use pkce?
+
Mobile, SPAs, and any public clients.
Artifact resolution service?
+
Endpoint used to exchange artifact for assertion.
Assertion consumer service?
+
Endpoint where SP receives SAML responses.
Assertion in saml?
+
A package of security information issued by an Identity Provider.
Assertion signing?
+
Proof that assertion came from trusted IdP.
Attribute mapping in ping?
+
Mapping LDAP or internal attributes to SAML assertions.
Attribute mapping?
+
Mapping user attributes from the IdP to the SP's identity fields.
Attribute release policy?
+
Rules governing which user data IdP sends.
How are attributes secured?
+
By signing and optional encryption.
Attributestatement?
+
Part of assertion containing user attributes.
Audience claim?
+
Identifies the resource the token is valid for.
'Audience mismatch'?
+
Assertion issued for wrong SP.
Audience restriction?
+
Ensures an assertion or token is used only by the intended SP.
'auth_time' claim?
+
Time the user was last authenticated.
Authentication api?
+
REST API enabling custom authentication UI.
Which authentication methods does adfs support?
+
Windows auth, forms auth, certificate auth.
Authnrequest?
+
An authentication request sent from the SP to the IdP to authenticate the user.
Why is the authorization code flow secure?
+
Tokens issued directly to backend server, not exposed to browser.
Authorization code flow?
+
The most secure OAuth 2.0 flow for server-side apps: the client exchanges an authorization code for tokens via its backend, so tokens are never exposed to the browser.
Authorization code grant?
+
Used for web apps; user logs in, backend exchanges authorization code for access token securely.
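A rough sketch of the backend's code-for-token exchange in Python; the endpoint URL and client credentials are hypothetical, while the form parameters (grant_type, code, redirect_uri, client_id, client_secret) are the standard OAuth 2.0 ones:

    import json
    import urllib.parse
    import urllib.request

    TOKEN_ENDPOINT = "https://auth.example.com/oauth2/token"  # hypothetical

    def exchange_code_for_tokens(code: str, redirect_uri: str) -> dict:
        """Server-side exchange of the authorization code (plus secret) for tokens."""
        data = urllib.parse.urlencode({
            "grant_type": "authorization_code",
            "code": code,
            "redirect_uri": redirect_uri,
            "client_id": "my-client-id",          # hypothetical
            "client_secret": "my-client-secret",  # kept server-side, never in the browser
        }).encode()
        with urllib.request.urlopen(TOKEN_ENDPOINT, data=data, timeout=5) as resp:
            return json.loads(resp.read())  # typically access_token, expires_in, ...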
Authorization endpoint?
+
Used to authenticate the user.
Authorization grant?
+
Credential representing user consent.
Authorization server responsibility?
+
Issue tokens, validate clients, manage scopes and consent.
Authorization server?
+
The server issuing access tokens and managing consent.
Auto healing in kubernetes?
+
Automatically restarts failed containers or reschedules pods to healthy nodes to ensure continuous availability.
Why avoid idp-initiated sso?
+
SP-initiated is more secure.
Should you avoid the implicit flow?
+
Yes, deprecated for security reasons.
Azure ad b2b?
+
Allows external identities to collaborate securely.
Azure ad b2c?
+
Identity platform for customer applications.
Azure ad connect?
+
Sync tool connecting on-prem AD with Azure AD.
Azure ad mfa?
+
Multi-factor authentication service to enhance security.
Azure ad saml?
+
Azure Active Directory supporting SAML-based SSO.
Azure ad vs adfs?
+
Azure AD = cloud; ADFS = on-prem federation.
Azure ad vs okta?
+
Azure AD is Microsoft cloud identity; Okta is independent IAM leader.
Azure ad vs pingfederate?
+
Azure AD = cloud-first; PingFederate = enterprise federation with granular control.
Azure ad?
+
A cloud-based identity and access management service by Microsoft.
Back-channel logout?
+
Logout using server-to-server notifications rather than the browser.
Back-channel slo?
+
Uses server-to-server calls for logout.
Backup strategy for cloud?
+
Regular snapshots, versioned backups, geo-replication, and automated schedules ensure data recovery.
Bearer token?
+
A bearer token is a type of access token that grants access to whoever presents it; no additional proof of possession is required.
Best practices for jwt?
+
Use HTTPS, short-lived tokens, refresh tokens, sign tokens, and avoid storing sensitive data in payload.
Best practices for oauth/jwt in production?
+
Use HTTPS, short-lived tokens, refresh tokens, secure storage, signature verification, and proper logging/auditing.
Biggest benefit of sso?
+
User convenience and reduced login friction.
Biometric sso?
+
SSO authenticated via biometrics like fingerprint or face.
Can cookies break sso?
+
Yes, blocked cookies prevent session persistence.
Can jwt be revoked?
+
JWTs are stateless, so they cannot be revoked by default. Implement token blacklisting or short expiration for control.
Can metadata expire?
+
Yes, metadata can have expiration to enforce updates.
Can pingfederate encrypt assertions?
+
Yes, full support for SAML encryption.
Can refresh tokens be revoked?
+
Yes, through revocation endpoints.
Can scopes control mfa?
+
Yes, using acr/amr claims.
Can sso reduce password reuse?
+
Yes, only one password is needed.
Can sso reduce phishing?
+
Yes, users rarely enter passwords.
Can umbraco support jwt authentication?
+
Yes, JWT middleware can secure API endpoints and allow stateless authentication for custom Umbraco APIs.
Why can't oauth2 replace saml?
+
OAuth2 does not authenticate users; it needs OIDC for that.
Certificate rollover?
+
Rotating signing certificates without service disruption to maintain security.
Check_session_iframe?
+
Used to track session changes via iframe polling.
Claim in jwt?
+
Claims are pieces of information asserted about a subject (user) in the token, e.g., sub, exp, role.
Claims provider trust?
+
Identity providers trusted by ADFS.
Client credentials flow?
+
Server-to-server authentication without a user involved.
Client credentials grant?
+
Used for machine-to-machine authentication without user involvement.
Client in oauth 2.0?
+
The application requesting access to a resource.
Client in oidc?
+
Application requesting tokens from IdP.
Client secret?
+
A confidential credential used by backend (confidential) OAuth clients.
Client_id?
+
Unique identifier for the client.
Client_secret?
+
Secret only known to confidential clients.
Cloud access control?
+
Access control manages who can access cloud resources and what operations they can perform.
Cloud access key best practices?
+
Rotate keys, use IAM roles, avoid hardcoding keys, and monitor usage.
Cloud access security broker (casb)?
+
A CASB sits between cloud users and cloud services as a policy enforcement point, monitoring activity and protecting sensitive data.
Cloud audit logging?
+
Audit logging records user activity, configuration changes, and security events in cloud platforms.
Cloud audit trail?
+
Audit trail logs record all user actions and system changes for accountability and compliance.
Cloud breach detection?
+
Breach detection identifies unauthorized access or compromise of cloud resources.
Cloud compliance auditing?
+
Compliance auditing verifies cloud configurations and operations meet regulatory requirements.
Cloud compliance frameworks?
+
Frameworks include ISO 27001, SOC 2, HIPAA, PCI DSS, and GDPR.
Cloud compliance standards?
+
Standards like ISO 27001, SOC 2, GDPR, HIPAA ensure cloud providers meet regulatory security requirements.
Cloud data backup?
+
Data backup creates copies of cloud data to restore in case of loss or corruption.
Cloud data classification?
+
Data classification categorizes cloud data by sensitivity to apply proper security controls.
Cloud data residency?
+
Data residency ensures cloud data is stored in specified geographic locations to comply with regulations.
Cloud ddos mitigation best practices?
+
Use distributed protection, traffic filtering, auto-scaling, and monitoring.
Cloud disaster recovery?
+
Disaster recovery ensures cloud workloads can recover quickly from failures or attacks.
Cloud encryption best practices?
+
Use strong algorithms, rotate keys, encrypt data in transit and at rest, and protect key management.
Cloud encryption in transit and at rest?
+
In-transit encryption protects data during network transfer. At-rest encryption protects stored data on disk or database.
Cloud encryption key rotation?
+
Key rotation periodically updates encryption keys to reduce the risk of compromise.
Cloud endpoint security best practices?
+
Install agents, enforce policies, monitor behavior, and isolate compromised endpoints.
Cloud endpoint security?
+
Endpoint security protects devices that access cloud resources from malware, breaches, and unauthorized access.
Cloud firewall best practices?
+
Use least privilege, segment networks, update rules regularly, and log traffic.
Cloud firewall?
+
Cloud firewall is a network security service to filter and monitor traffic to cloud resources.
Cloud forensic investigation?
+
Cloud forensics investigates breaches or attacks to identify root causes and affected assets.
Cloud identity federation vs sso?
+
Federation allows using external identities; SSO allows single authentication across multiple apps.
Cloud identity federation?
+
Allows users to access multiple cloud services using a single identity, enabling SSO across providers.
Cloud identity management?
+
Cloud identity management handles user authentication, authorization, and lifecycle in cloud services.
Cloud incident management?
+
Incident management handles security events to minimize impact and prevent recurrence.
Cloud incident response plan?
+
The plan outlines procedures, roles, and tools for responding to cloud security incidents.
Cloud incident response?
+
Incident response is the process of detecting, analyzing, and mitigating security incidents in the cloud.
Cloud key management?
+
Cloud key management creates, stores, rotates, and controls access to cryptographic keys.
Cloud key rotation policy?
+
Policy defines frequency and procedure for rotating encryption keys.
Cloud logging and monitoring?
+
Collects audit logs, metrics, and events to detect anomalies, unauthorized access, and security breaches.
Cloud logging best practices?
+
Centralize logs, enable retention, monitor for anomalies, and secure log storage.
Cloud logging retention policy?
+
Defines how long logs are stored and ensures they are archived securely for compliance.
Cloud logging?
+
Cloud logging records user activity, system events, and access for auditing and monitoring.
Cloud malware protection?
+
Malware protection detects and removes malicious software from cloud workloads and endpoints.
Cloud misconfiguration?
+
Misconfiguration occurs when cloud resources are improperly configured, creating security risks.
Cloud monitoring best practices?
+
Monitor critical assets, configure alerts, and integrate with SIEM and incident response.
Cloud monitoring?
+
Cloud monitoring tracks resource usage, performance, availability, and security events in real time, helping identify issues proactively.
Cloud multi-factor authentication best practices?
+
Enable MFA for all users and use strong methods like TOTP or hardware tokens.
Cloud native ha design?
+
Using redundancy, distributed systems, microservices, and auto-scaling to achieve high availability.
Cloud native security?
+
Security designed specifically for cloud services and microservices, including containers, Kubernetes, and serverless workloads.
Cloud network monitoring?
+
Network monitoring observes traffic flows, detects anomalies, and enforces segmentation.
Cloud network segmentation?
+
Network segmentation isolates cloud workloads to reduce attack surfaces.
Cloud patch management?
+
Automated application of security patches to cloud OS, software, and applications to fix vulnerabilities.
Cloud penetration testing policy?
+
Policy defines rules and approvals required before conducting penetration tests on cloud services.
Cloud penetration testing tools?
+
Tools include Kali Linux, Metasploit, Nmap, Burp Suite, and cloud provider-native tools.
Cloud penetration testing?
+
Ethical, simulated attacks on cloud systems to identify vulnerabilities and misconfigurations.
Cloud role-based access control (rbac)?
+
RBAC assigns permissions based on user roles to enforce least privilege.
Cloud secrets management?
+
Secrets management stores and controls access to sensitive information like API keys and passwords.
Cloud secure devops?
+
Secure DevOps integrates security into DevOps processes and CI/CD pipelines.
Cloud secure gateway?
+
Secure gateway controls and monitors access between users and cloud applications.
Cloud security assessment?
+
Assessment evaluates cloud infrastructure configurations and practices against security standards.
Cloud security auditing?
+
Auditing evaluates cloud resources and policies to ensure security and compliance.
Cloud security automation tools?
+
Tools include AWS Config, Azure Security Center, GCP Security Command Center, and Terraform with security checks.
Cloud security automation?
+
Uses scripts or tools to automate security checks, patching, policy enforcement, and threat remediation, reducing human error.
Cloud security baseline?
+
Security baseline defines standard configurations and controls for cloud environments.
Cloud security best practices?
+
Enforce IAM, encryption, monitoring, logging, patching, least privilege, and incident response.
Cloud security group best practices?
+
Use least privilege, separate environments, restrict inbound/outbound rules, and monitor traffic.
Cloud security incident types?
+
Types include data breaches, misconfiguration, account compromise, malware infection, and insider threats.
Cloud security monitoring tools?
+
Tools include AWS GuardDuty, Azure Defender, GCP Security Command Center, and third-party SIEMs.
Cloud security orchestration?
+
Security orchestration automates workflows, threat response, and remediation across cloud systems.
Cloud security policy?
+
Policy defines rules, standards, and practices to protect cloud resources.
Cloud security posture management (cspm)?
+
CSPM tools continuously monitor cloud environments for misconfigurations, vulnerabilities, and compliance risks.
Cloud security?
+
Cloud security is the set of policies, technologies, and controls that protect data, applications, and infrastructure in cloud environments, ensuring their confidentiality, integrity, and availability (CIA).
Cloud siem?
+
Cloud SIEM centralizes log collection, analysis, alerting, and reporting for security events.
Cloud threat detection?
+
Threat detection identifies malicious activity or anomalies in cloud environments.
Cloud threat intelligence?
+
Threat intelligence provides data on current security threats and vulnerabilities to enhance cloud defenses.
Cloud threat modeling?
+
Identifies potential threats and vulnerabilities in cloud architectures and designs mitigation strategies.
Cloud vpn?
+
Cloud VPN securely connects on-premises networks to cloud resources over encrypted tunnels.
Cloud vulnerability assessment?
+
It identifies security weaknesses in cloud infrastructure, applications, and configurations.
Cloud vulnerability management?
+
Vulnerability management identifies, prioritizes, and remediates security weaknesses.
Cloud vulnerability scanning?
+
Scanning detects security flaws in cloud infrastructure, applications, and containers.
Cloud workload isolation?
+
Workload isolation separates applications or tenants to prevent lateral movement of threats.
Cloud workload protection platform (cwpp)?
+
CWPP provides security for workloads running across cloud VMs, containers, and serverless environments.
Cloud-native security?
+
Cloud-native security integrates security controls directly into cloud applications and infrastructure.
Common saml attributes?
+
email, firstName, lastName, employeeID.
Compliance in cloud security?
+
Compliance ensures cloud deployments adhere to regulatory standards like GDPR, HIPAA, or PCI DSS.
Compliance monitoring in cloud?
+
Continuous auditing to ensure resources follow regulatory and internal security standards.
Conditional access?
+
A policy engine that restricts token issuance and access based on conditions.
Confidential client?
+
Client that securely stores secrets (backend server).
Configuration management in cloud security?
+
Configuration management ensures cloud resources are deployed securely and consistently.
Consent screen?
+
UI shown to user listing requested permissions.
Container security?
+
Protects containerized applications and orchestration platforms (e.g., Docker, Kubernetes) using image scanning, runtime protection, and least privilege.
Continuous compliance?
+
Automated monitoring of cloud resources to maintain compliance with regulations like HIPAA or GDPR.
Cookies relate to sso?
+
SSO often uses session cookies to maintain authenticated sessions across multiple apps or domains.
Credential stuffing protection?
+
OIDC frameworks block repeated unsuccessful logins.
Cross-domain sso?
+
SSO across different organizations.
Csrf state parameter?
+
Used to protect against CSRF attacks during authentication.
Custom scopes?
+
App-defined permissions for additional claims.
Data loss prevention (dlp)?
+
DLP prevents unauthorized access, sharing, or leakage of sensitive cloud data.
Data masking?
+
Hides sensitive data in non-production environments to protect privacy while allowing application testing.
Ddos protection in cloud?
+
Defends cloud services against Distributed Denial of Service attacks using mitigation, traffic filtering, and scaling.
Decentralized identity?
+
User-controlled identity using blockchain-based models.
Delegation?
+
Acting on behalf of a user with limited privileges.
'Destination mismatch'?
+
Assertion sent to wrong ACS URL.
Device code flow?
+
Authentication flow for devices without browsers or with limited input.
Difference between access token and refresh token?
+
Access tokens are short-lived tokens for resource access. Refresh tokens are long-lived and used to obtain new access tokens without re-authentication.
Difference between app registration and enterprise application?
+
App Registration = app identity; Enterprise App = SSO configuration instance.
Difference between auth code and auth code + pkce?
+
PKCE adds code verifier & challenge for extra security.
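A short Python sketch of generating the PKCE pair per RFC 7636 (S256 method); only standard-library modules are used:

    import base64
    import hashlib
    import secrets

    # The client keeps the random verifier secret and sends only its SHA-256
    # challenge with the authorization request; it reveals the verifier later
    # at the token endpoint, proving it initiated the flow.
    code_verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    digest = hashlib.sha256(code_verifier.encode("ascii")).digest()
    code_challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()

    print(code_challenge)  # sent with code_challenge_method=S256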
Difference between authentication and authorization?
+
Authentication verifies identity; authorization defines what an authenticated user can access.
Difference between availability zone and region?
+
A Region is a geographical location. An Availability Zone (AZ) is an isolated data center within a region providing HA.
Difference between dr and ha?
+
HA focuses on real-time availability and minimal downtime. DR is about recovering after a major failure or disaster, which may involve longer restoration times.
Difference between icontentservice and ipublishedcontent?
+
IContentService is used for editing/staging content. IPublishedContent is for reading published content efficiently.
Difference between id_token and access_token?
+
ID token is for authentication; access token is for authorization.
Difference between oauth 1.0 and 2.0?
+
OAuth 1.0 requires cryptographic signing; OAuth 2.0 uses bearer tokens, simpler flow, and supports multiple grant types like Authorization Code and Client Credentials.
Difference between oauth and openid connect?
+
OAuth is for authorization; OIDC is an authentication layer on top of OAuth providing user identity.
Difference between oauth scopes and claims?
+
Scopes define the permissions requested; claims define attributes about the user or session.
Difference between par and jar?
+
PAR = pushed authorization request; JAR = signed (JWT) authorization request.
Difference between published content and draft content?
+
Draft content is editable but not visible to the public; published content is live on the website.
Difference between saml and jwt?
+
SAML uses XML for identity assertions; JWT uses JSON. JWT is lighter and easier for APIs, while SAML is enterprise-oriented.
Difference between saml and oauth?
+
SAML is for SSO using XML; OAuth is authorization using JSON/REST.
Difference between saml and oidc?
+
SAML uses XML and is enterprise-focused; OIDC uses JSON and supports modern apps.
Difference between sso and mfa?
+
SSO = one login across apps; MFA = additional security factors during login.
Difference between sso and oauth?
+
SSO is mainly for authentication across apps. OAuth is for delegated authorization without sharing credentials.
Difference between sso and password sync?
+
SSO shares authentication state; password sync copies passwords across systems.
Difference between sso and slo?
+
SSO = login across apps; SLO = logout across apps.
Difference between stateless and stateful authentication?
+
JWT enables stateless authentication—server does not store session info. Traditional sessions are stateful, stored on the server.
Difference between symmetric and asymmetric encryption?
+
Symmetric uses same key for encryption and decryption. Asymmetric uses public/private key pairs. Asymmetric is used in secure key exchange.
Difference between umbraco api controllers and mvc controllers?
+
API controllers return JSON or XML data for apps; MVC controllers render views/templates.
Discovery document?
+
Well-known configuration endpoint for OIDC.
Why is discovery important?
+
Allows dynamic configuration of OIDC clients.
Distributed denial-of-service (ddos) protection?
+
DDoS protection mitigates attacks that overwhelm cloud services with traffic.
Do access tokens depend on scopes?
+
Yes, scopes define API permissions.
Do all protocols support slo?
+
Yes, but implementations vary.
Do all sps support sso?
+
Not always — legacy apps may need custom connectors.
Do browsers impact sso?
+
Yes, privacy modes may block redirects/cookies.
Should you log tokens?
+
Never log access or refresh tokens.
Does adfs support mfa?
+
Yes, with built-in and external providers.
Does adfs support oauth2?
+
Yes, since ADFS 3.0.
Does adfs support saml sso?
+
Yes, as IdP and SP.
Does azure ad support saml?
+
Yes, SAML 2.0 with IdP-initiated and SP-initiated flows.
Does id token depend on scopes?
+
Yes, claims in ID Token depend on scopes.
How does jwt work?
+
Server generates JWT after authentication. Client stores it (usually in local storage). Subsequent requests include the token in the Authorization header for stateless authentication.
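To make the mechanics concrete, here is a standard-library-only Python sketch of HS256 signing and verification; real systems would use a vetted JWT library and also validate claims such as exp:

    import base64, hashlib, hmac, json

    def b64url(data: bytes) -> bytes:
        return base64.urlsafe_b64encode(data).rstrip(b"=")

    def sign_jwt(payload: dict, secret: bytes) -> str:
        """Build header.payload.signature, signed with HMAC-SHA256 (HS256)."""
        header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
        body = b64url(json.dumps(payload).encode())
        sig = b64url(hmac.new(secret, header + b"." + body, hashlib.sha256).digest())
        return (header + b"." + body + b"." + sig).decode()

    def verify_jwt(token: str, secret: bytes) -> bool:
        header, body, sig = token.split(".")
        expected = b64url(hmac.new(secret, f"{header}.{body}".encode(),
                                   hashlib.sha256).digest()).decode()
        return hmac.compare_digest(sig, expected)  # constant-time comparison

    token = sign_jwt({"sub": "user-42", "role": "admin"}, b"shared-secret")
    assert verify_jwt(token, b"shared-secret")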
Does oidc support single logout?
+
Yes, through RP-Initiated and Front/Back-channel logout.
Does oidc support sso?
+
Yes, OIDC provides Single Sign-On functionality.
Does okta expose jwks?
+
/oauth2/v1/keys endpoint.
Does okta support password sync?
+
Yes, via provisioning connectors.
Does pingfederate issue jwt tokens?
+
Yes, for access and id tokens.
Does pingfederate support mfa?
+
Yes, via PingID or third-party integrations.
Does pingfederate support pkce?
+
Yes, for public clients.
Does pingfederate support saml sso?
+
Yes, both IdP and SP roles.
How does saml ensure security?
+
Uses XML signatures, encryption, certificates, and timestamps.
What does saml metadata contain?
+
Certificates, endpoints, SSO URLs, entity IDs.
What does saml stand for?
+
Security Assertion Markup Language.
Does saml use tokens?
+
Yes, SAML assertions are XML-based tokens.
What does silent logout mean?
+
Logout without redirecting the user.
Why does slo fail?
+
Different implementations or expired sessions.
Why does sso break?
+
Wrong certificates, clock skew, misconfigured endpoints.
How does sso enhance security?
+
Reduces password fatigue, centralizes authentication policies, enables MFA, and minimizes login-related vulnerabilities.
Does sso help in compliance?
+
Yes, supports SOC2, HIPAA, GDPR requirements.
Does sso improve auditability?
+
Centralized login logs.
Does sso improve security?
+
Yes: it reduces password fatigue and phishing risk, centralizes authentication and MFA enforcement, and enables consistent policies.
Does sso increase productivity?
+
Yes, no repeated logins.
Does sso reduce attack surface?
+
Yes, fewer passwords and login endpoints.
Does sso reduce helpdesk calls?
+
Reduces password reset requests.
Does sso require accurate time sync?
+
Yes, tokens require clock accuracy.
Does sso require certificate management?
+
Yes, periodic rollover is required.
How does sso work?
+
A centralized identity provider authenticates the user, issues a token or cookie, and applications trust this token to grant access.
Domain federation?
+
Configures ADFS or external IdP to authenticate domain users.
Dpop?
+
Demonstration of Proof-of-Possession; prevents token theft misuse.
Dynamic client registration?
+
Allows clients to auto-register at IdP.
Dynamic group?
+
Group with rule-based membership.
Email' scope?
+
Access to user email and email_verified.
Why encode saml messages?
+
To ensure safe transport via URLs or POST.
Should you encrypt sensitive attributes?
+
Highly recommended.
Encryption at rest?
+
Encryption at rest protects stored data using cryptographic techniques.
Why do encryption errors occur?
+
Incorrect certificate or key mismatch.
Encryption in cloud?
+
Encryption protects data in transit and at rest using algorithms like AES or RSA. It prevents unauthorized access to sensitive cloud data.
Encryption in transit?
+
Encryption in transit protects data as it travels over networks between cloud services or users.
End_session endpoint?
+
Used for OIDC logout operations.
Endpoint security in cloud?
+
Protects client devices, VMs, and containers from malware, unauthorized access, and vulnerabilities.
Why enforce mfa?
+
Improves security for sensitive resources.
Enterprise application?
+
Represents an SP configuration used for SSO.
Enterprise sso?
+
SSO for employees using enterprise IdPs.
Entity category?
+
Classification of SP/IdP capabilities.
Entity id?
+
A unique identifier for SP or IdP in SAML.
Example of federation hub?
+
Azure AD, ADFS, Okta, PingFederate.
'exp' claim?
+
Expiration timestamp.
'Expired assertion'?
+
Assertion outside NotOnOrAfter time.
Explain auto scaling.
+
Auto Scaling automatically adjusts compute resources based on demand, improving availability and cost efficiency.
Explain bastion host.
+
A Bastion host is a secure jump server used to access instances in private networks.
Explain cloud firewall.
+
Cloud firewalls filter network traffic at the edge or VM level, enforcing security rules to prevent unauthorized access.
Explain disaster recovery in cloud.
+
Disaster Recovery (DR) is a set of processes to restore cloud applications and data after failures. It involves backups, replication, multi-region deployment, and failover strategies.
Failover in cloud?
+
Automatic switching to a redundant system when a primary system fails, ensuring service continuity.
Fapi?
+
Financial grade API security profile for OIDC/OAuth2.
Fault tolerance in cloud?
+
Fault tolerance ensures the system continues functioning despite component failures using redundancy and failover.
Federated identity?
+
Using external identity providers like Google or Azure AD.
Federation hub?
+
Central IdP connecting multiple SPs.
Federation in azure ad?
+
Using ADFS or external IdPs for authentication.
Federation in sso?
+
Trust relationship enabling cross-domain authentication.
Federation metadata?
+
Configuration XML exchanged between IdP and SP.
Federation?
+
Trust between identity providers and service providers.
Fine-grained authorization?
+
Scoped permissions down to resource-level.
Which flow is best for iot devices?
+
Device Code flow.
Which flow is best for machine-to-machine?
+
Client Credentials.
Which flow is best for mobile?
+
Authorization Code with PKCE.
Which flow is best for spas?
+
Authorization Code with PKCE (Implicit avoided).
Which flow is more secure?
+
SP-initiated, due to request ID validation.
Which flow should spas use?
+
Authorization Code Flow with PKCE.
Which flows support refresh tokens?
+
Authorization Code Flow and Hybrid Flow.
Which flows support sso?
+
Authorization Code or Hybrid flow via OIDC.
Which flows does azure ad support?
+
Auth Code, PKCE, Client Credentials, Device Code, ROPC.
What format are oauth tokens?
+
Typically JWT or opaque tokens.
What format does oidc use?
+
JSON, REST APIs, and JWT tokens.
What formats can access tokens use?
+
JWT or opaque format.
What formats can id tokens use?
+
Always JWT.
Front-channel logout?
+
Logout performed via the browser using redirects.
Front-channel slo?
+
Uses browser redirects for logout.
Global logout?
+
Logout from entire identity federation.
Grant type?
+
Defines how the client obtains access tokens.
Graph api?
+
API to manage users, groups, and apps.
What happens if the idp is down during slo?
+
SPs may not logout properly.
Haproxy in cloud?
+
HAProxy is a load balancer and proxy server that supports high availability and failover.
High availability (ha) in cloud?
+
HA ensures that cloud services remain accessible with minimal downtime. It uses redundancy, failover mechanisms, and load balancing to maintain continuous operations.
Home realm discovery?
+
Identifying which IdP a user belongs to so login is routed correctly.
Http artifact binding?
+
Message reference is sent, not entire assertion.
Http post binding?
+
SAML message sent through an HTML form post.
Http redirect binding?
+
SAML message is sent via URL query string.
Https requirement?
+
OAuth 2.0 must use HTTPS for all communication.
Hybrid cloud security?
+
Hybrid cloud security protects workloads and data across on-premises and cloud environments.
Hybrid flow?
+
An OIDC flow combining elements of the Implicit and Authorization Code flows.
Iam in cloud security?
+
Identity and Access Management controls who can access cloud resources and what they can do, via authentication, authorization, roles, policies, and MFA.
'iat' claim?
+
Issued-at timestamp.
Id token signature?
+
Verifies integrity and authenticity.
Id token?
+
A JWT issued by the OIDC provider containing identity claims about the authenticated user.
Id_token_hint?
+
Hint for logout identifying user's ID Token.
Identifier (entity id)?
+
SP unique identifier configured in Azure AD.
Identity brokering?
+
IdP sits between user and multiple IdPs.
Identity federation?
+
A trust relationship that lets users access multiple systems or cloud services with a single identity.
Identity hub?
+
A centralized identity broker connecting many IdPs.
Identity protection?
+
Detects risky logins and risky users.
Identity provider (idp)?
+
A trusted service that authenticates users and issues tokens, claims, or SAML assertions for SSO.
Identity token validation?
+
Ensuring token signature, audience, and issuer are correct.
Idp discovery?
+
Selecting the correct identity provider for login.
Idp federation?
+
One IdP authenticates users for many SPs.
Idp in sso?
+
Identity Provider — authenticates the user.
Idp metadata url?
+
URL where SP fetches IdP metadata.
Idp proxying?
+
IdP acting as intermediary between user and another IdP.
Idp?
+
System that authenticates users and issues tokens/assertions.
Idp-initiated sso?
+
Login initiated at the Identity Provider rather than the SP.
Immutable infrastructure?
+
Infrastructure that is never modified after deployment, only replaced. It ensures consistency and security.
Impersonation?
+
User acting as another identity — dangerous and restricted.
Why is the implicit flow deprecated?
+
It exposes tokens in the browser URL and insecure environments.
Implicit flow?
+
A legacy OAuth 2.0 flow for client-side apps where tokens are returned directly in the browser (URL fragment) without a client secret or backend; no longer recommended.
Implicit vs code flow?
+
Code Flow more secure; Implicit deprecated.
Incremental consent?
+
Requesting only partial permissions at first.
Inresponseto attribute?
+
Links the response to the matching AuthnRequest.
'InResponseTo missing'?
+
IdP did not include request ID; insecure for SP-initiated.
Introspection endpoint?
+
Used to validate opaque access tokens.
Intrusion detection and prevention (ids/ips)?
+
IDS/IPS monitors network traffic for malicious activity, raising alerts or blocking threats.
Intrusion detection system (ids)?
+
IDS monitors cloud traffic for malicious activity or policy violations.
Intrusion prevention system (ips)?
+
IPS not only detects but also blocks malicious traffic in real time.
'Invalid signature' error?
+
Assertion signature mismatch or wrong certificate.
How is jwt used in microservices?
+
JWT allows secure stateless communication between microservices, with each service verifying the token without a central session store.
How is jwt verified?
+
Server uses the secret or public key to verify the token’s signature and validity, ensuring it was issued by a trusted source.
Which is more reliable: front-channel or back-channel logout?
+
Back-channel, because it avoids browser issues.
Is oauth 2.0 for authentication?
+
Not by design; it's for authorization. OIDC adds authentication.
Is oauth 2.0 stateful or stateless?
+
Can be either, depending on token type and architecture.
Is oidc authentication or authorization?
+
OIDC is authentication; OAuth2 is authorization.
Is oidc stateless or stateful?
+
Stateless — relies on JWT tokens.
Is oidc suitable for mobile apps?
+
Yes, highly optimized for mobile clients.
Is saml used for authentication or authorization?
+
Primarily authentication; asserts user identity to SP.
Is sso a single point of failure?
+
Yes, if IdP is down, login for all apps fails.
Is sso for authentication or authorization?
+
SSO is primarily for authentication.
Is sso latency-prone?
+
Yes, due to redirects and token validation.
How is token expiry handled in oauth?
+
Access tokens have a short TTL; refresh tokens are used to request a new access token without user interaction.
'iss' claim?
+
Issuer identifier.
Issuer claim?
+
Identifies authorization server that issued the token.
'Issuer mismatch'?
+
Incorrect IdP entity ID used.
Jar (jwt authorization request)?
+
Authorization request packaged as signed JWT.
Jarm?
+
JWT-secured Authorization Response Mode — adds signing to auth responses.
Just-in-time provisioning?
+
User accounts are created automatically at login time.
Jwks endpoint?
+
JSON Web Key Set for token verification keys.
Jwks uri?
+
Endpoint serving public keys for validating tokens.
Jwks?
+
JSON Web Key Set for validating tokens.
Jwt header?
+
Header specifies the signing algorithm (e.g., HS256) and token type (JWT).
Jwt kid field?
+
Key ID to identify which signing key to use.
Jwt payload?
+
The payload contains claims, which are statements about the user or session (e.g., user ID, roles, expiration).
Jwt signature?
+
A cryptographic signature ensuring the token's integrity and authenticity, generated with a secret (HMAC) or private key (RSA/ECDSA).
Jwt token?
+
Self-contained token with claims.
Jwt?
+
JSON Web Token (JWT) is a compact, URL-safe, signed token format used to securely transmit claims between parties. It includes a header, payload, and signature.
Kerberos?
+
Network authentication protocol used in Windows SSO.
Key components of cloud security?
+
Key components include identity and access management (IAM), data protection, network security, monitoring, and compliance.
Key management service (kms)?
+
A cloud service that securely creates, stores, rotates, and controls access to encryption keys.
Kubernetes role in ha?
+
Kubernetes provides HA by managing pods across multiple nodes, self-healing, and load balancing.
Why limit attribute sharing?
+
Minimize data to reduce privacy risk.
Should you limit scopes?
+
Yes, always follow least privilege.
Load balancer?
+
A load balancer distributes incoming traffic across multiple servers to ensure high availability and performance.
Logging & auditing in cloud security?
+
Captures user actions and system events to detect breaches, analyze incidents, and meet compliance.
Which logout method is most reliable?
+
Back-channel logout.
Main cloud security challenges?
+
Challenges include data breaches, insecure APIs, misconfigured cloud services, insider threats, and compliance issues.
Main types of cloud security?
+
Includes Data Security, Network Security, Identity & Access Management (IAM), Application Security, and Endpoint Security. It protects cloud workloads from breaches and vulnerabilities.
Metadata important?
+
Ensures both IdP and SP trust each other and understand endpoints.
Metadata signature?
+
Indicates authenticity of metadata file.
Mfa in oauth?
+
Additional step enforced by authorization server.
Microsegmentation in cloud security?
+
Divides networks into smaller segments to isolate workloads and minimize lateral attack movement.
Microsoft graph permissions?
+
Scopes that define what an app can access.
Monitor saml logs?
+
Detects anomalies and attacks.
Mtls in oauth?
+
Mutual TLS binding tokens to client certificates.
Multi-cloud security?
+
Multi-cloud security manages security consistently across multiple cloud providers.
Multi-factor authentication (mfa)?
+
MFA requires two or more verification methods to access cloud resources, enhancing security beyond passwords.
Multi-federation?
+
Multiple IdPs serving different user groups.
Multi-region deployment?
+
Deploying resources in multiple regions improves disaster recovery, redundancy, and availability.
Multi-tenant app?
+
App serving multiple organizations with separate identities.
Multi-tenant identity?
+
Multiple tenants share identity infrastructure.
Nameid formats?
+
EmailAddress, Persistent, Transient, Unspecified.
Nameid?
+
Unique identifier for the user in SAML.
Nameidmapping?
+
Mapping NameIDs between IdP and SP.
Network acl?
+
A Network Access Control List is a stateless firewall controlling traffic at the subnet level, providing an additional layer beyond security groups.
'nonce' claim?
+
Used to prevent replay attacks.
Nonce used for?
+
To prevent replay attacks.
Nonce?
+
Unique value used in ID token to prevent replay.
Not to store tokens?
+
LocalStorage or unencrypted browser memory.
Notbefore claim?
+
Defines earliest time the assertion is valid.
Notonorafter claim?
+
Expiration time of assertion.
Oauth 2.0 grant types?
+
Auth Code, PKCE, Client Credentials, Password, Implicit, Device Code.
Oauth 2.0?
+
An authorization framework allowing third-party apps to access user resources without sharing passwords.
Oauth 2.1?
+
A simplification removing implicit and ROPC flows; PKCE required.
Oauth backchannel logout?
+
Mechanism to notify apps of user logout.
Oauth device flow?
+
Auth flow for devices without browsers.
Oauth grant types?
+
Common grant types: Authorization Code, Implicit, Password Credentials, Client Credentials. They define how clients obtain access tokens.
Oauth introspection endpoint?
+
API to check token validity for opaque tokens.
Oauth revocation endpoint?
+
API to revoke access or refresh tokens.
Oauth?
+
OAuth is an open-standard authorization protocol that allows third-party apps to access user resources without sharing credentials. It issues access tokens to grant limited access to resources.
Oauth2 used for?
+
Authorization, not authentication.
Oauth2 with sso integration?
+
OAuth2 with SSO enables a single login using OAuth’s token-based authorization to access multiple protected services.
Oidc claims?
+
Statements about a user (e.g., email, name).
Oidc created?
+
To enable secure user authentication using modern JSON/REST technology.
Oidc discovery document?
+
Well-known configuration containing endpoints and metadata.
Oidc federation?
+
Uses OIDC for federated identity.
Oidc flow is best for spas?
+
Auth Code Flow with PKCE.
Oidc in apple sign-in?
+
Apple Sign-In is based on OIDC standards.
Oidc in auth0?
+
Auth0 fully supports OIDC flows and JWT issuance.
Oidc in aws cognito?
+
Cognito provides OIDC-based hosted UI flows.
Oidc in azure ad?
+
Azure AD supports OIDC with Microsoft Identity platform.
Oidc in fusionauth?
+
FusionAuth supports OIDC, MFA, and OAuth2 flows.
Oidc in google identity?
+
Google uses OIDC for all user authentication.
Oidc in keycloak?
+
Keycloak is an open-source IdP supporting OIDC.
Oidc in okta?
+
Okta provides custom and default OIDC authorization servers.
Oidc in pingfederate?
+
PingFederate supports OIDC with OAuth AS extensions.
Oidc in salesforce?
+
Salesforce acts as an OIDC provider for SSO.
Oidc in sso?
+
OAuth2-based identity layer issuing ID tokens.
Oidc preferred over saml?
+
Lightweight JSON tokens, mobile-ready, modern architecture.
Oidc scopes?
+
Permissions for claims in ID Token/UserInfo.
Oidc vs api keys?
+
OIDC is secure and user-based; API keys are static secrets.
Oidc vs basic auth?
+
OIDC uses token-based modern auth; Basic Auth sends credentials each time.
Oidc vs jwt?
+
OIDC uses JWT; JWT is a token format, not a protocol.
Oidc vs kerberos?
+
OIDC = web/mobile; Kerberos = internal network protocol.
Oidc vs oauth device flow?
+
OIDC is for login; Device Flow is for non-browser devices.
Oidc vs oauth2?
+
OIDC adds authentication; OAuth2 only handles authorization.
Oidc vs password auth?
+
OIDC uses tokens; password auth uses credentials directly.
Oidc vs saml?
+
OIDC uses JSON/REST; SAML uses XML. OIDC suits mobile and modern apps.
Oidc vs ws-fed?
+
OIDC is modern JSON-based; WS-Fed is legacy Microsoft protocol.
Oidc?
+
OpenID Connect is an identity layer built on top of OAuth 2.0 to authenticate users.
Okta api token?
+
Token used for administrative API calls.
Okta app integration?
+
Application configuration for SSO.
Okta asa?
+
Advanced Server Access for SSH/RDP identity access.
Okta authentication api?
+
REST API for user authentication and token issuance.
Okta authorization server?
+
Custom OAuth server controlling token issuance.
Okta identity engine?
+
New adaptive authentication platform.
Okta idp discovery?
+
Chooses correct IdP based on user attributes.
Okta inline hook?
+
Extend Okta flows with external logic.
Okta mfa?
+
Multi-step authentication including SMS, Push, TOTP.
Okta org?
+
Dedicated Okta tenant for an organization.
Okta risk-based authentication?
+
Dynamically challenges or blocks based on risk.
Okta sign-on policy?
+
Rules defining how users authenticate to applications.
Okta system log?
+
Audit log for events and authentication attempts.
Okta universal directory?
+
Directory service storing users, groups, and attributes.
Okta verify?
+
Mobile authenticator for push and TOTP.
Okta vs adfs?
+
Okta = cloud SaaS; ADFS = on-prem with heavy infrastructure.
Okta vs pingfederate?
+
Okta = cloud-first; Ping = enterprise customizable federation.
Okta workflow?
+
Automation engine for identity tasks.
Okta?
+
Identity and access management platform for cloud applications, supporting SAML SSO and other federation protocols.
Opaque token?
+
Token that requires introspection to validate.
Openid connect (oidc)?
+
OIDC is an identity layer on top of OAuth 2.0 for authentication, returning an ID token that provides user identity info.
'openid' scope?
+
Mandatory scope to enable OIDC.
Par (pushed authorization request)?
+
The client sends the authorization request to the IdP via a secure back-channel POST before the redirect, preventing tampering.
Partial logout?
+
Only some apps logout.
Password credentials grant?
+
User provides username/password directly to client; now discouraged due to security risks.
Password vaulting sso?
+
SSO by storing and auto-filling credentials.
Passwordless sso?
+
SSO without passwords using FIDO2/WebAuthn.
Persistent nameid?
+
Long-lived identifier for a user.
'phone' scope?
+
Access to phone and phone_verified.
Pingdirectory?
+
Directory used with PingFederate for user management.
Pingfederate authentication policy?
+
Controls how authentication decisions are made.
Pingfederate connection?
+
Configuration linking SP and IdP.
Pingfederate console?
+
Admin dashboard for configuration.
Pingfederate idp adapter?
+
Plugin to authenticate users (LDAP, Kerberos etc).
Pingfederate oauth as?
+
Acts as authorization server issuing tokens.
Pingfederate vs adfs?
+
Ping = more flexible; ADFS = Microsoft ecosystem-focused.
Pingfederate?
+
Enterprise federation server acting as IdP/SP for SSO and identity integration, with SAML support.
Pingone?
+
Cloud identity solution integrating with PingFederate.
Pkce extension?
+
Proof Key for Code Exchange — protects public clients.
Pkce introduced?
+
To prevent authorization code interception attacks.
Pkce?
+
Proof Key for Code Exchange; enhances OAuth2 security for public clients.
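For illustration, a standard-library sketch of generating the PKCE verifier/challenge pair per RFC 7636:

```python
import base64
import hashlib
import os

def b64url(data: bytes) -> str:
    # base64url without padding, as required by RFC 7636
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

code_verifier = b64url(os.urandom(32))  # high-entropy secret kept by the client
code_challenge = b64url(hashlib.sha256(code_verifier.encode()).digest())
# The authorize request carries code_challenge (+ code_challenge_method=S256);
# the token request carries code_verifier so the server can recompute the hash.
```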
Policy contract?
+
Defines attributes shared with SP/IdP.
Post_logout_redirect_uri?
+
URL where user is redirected after logout.
Principle of least privilege?
+
Users are granted only the permissions necessary to perform their job functions.
Privileged identity management?
+
Controls and audits privileged roles.
Problem does oauth 2.0 solve?
+
It enables secure delegated access using tokens instead of credentials.
'profile' scope?
+
Access to basic user attributes.
Prohibited in oidc?
+
Passing tokens in the URL (except in the legacy implicit flow).
Proof-of-possession?
+
Tokens tied to a key so only holder with key can use them.
Protocol does azure ad support?
+
OIDC, OAuth2, SAML2, WS-Fed.
Protocol format does saml use?
+
XML.
Protocol is best for mobile apps?
+
OIDC and OAuth2.
Protocol is best for web apps?
+
SAML2 for enterprises, OIDC for modern apps.
Protocol uses json/jwt?
+
OIDC.
Protocol uses xml?
+
SAML2.
Protocols does adfs support?
+
SAML, WS-Fed, OAuth2, OIDC.
Protocols does okta support?
+
OIDC, OAuth2, SAML2, SCIM.
Protocols does pingfederate support?
+
OIDC, OAuth2, SAML2, WS-Trust.
Protocols support sso?
+
SAML2, OIDC, OAuth2, WS-Fed, Kerberos.
Public client?
+
A client without a secure place to store secrets, e.g., mobile apps and SPAs.
Rate limiting in cloud security?
+
Limits the number of requests to APIs or services to prevent abuse and DDoS attacks.
Recipient attribute?
+
SP endpoint expected to receive the assertion.
Redirect_uri?
+
Endpoint (URL) where the authorization server sends tokens or authorization codes after login.
Redundancy in ha?
+
Duplication of critical components to avoid single points of failure, e.g., multiple servers, networks, or databases.
Refresh token flow?
+
Used to obtain new access tokens silently.
Refresh token grace period?
+
Allows old token to work briefly during rotation.
Refresh token lifetime?
+
Can be days to months based on policy.
Refresh token rotation?
+
Each refresh returns a new token; old one invalidated.
Refresh token?
+
A long-lived token used to obtain new access tokens without re-login.
Refresh tokens long lived?
+
To enable new access tokens without user interaction.
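A hedged sketch of the refresh grant using the requests library; the endpoint, client id, and stored token are placeholders, not any specific provider's values:

```python
import requests

TOKEN_URL = "https://idp.example.com/oauth2/token"          # hypothetical endpoint
stored_refresh_token = "REFRESH_TOKEN_FROM_EARLIER_LOGIN"   # placeholder

resp = requests.post(
    TOKEN_URL,
    data={
        "grant_type": "refresh_token",
        "refresh_token": stored_refresh_token,
        "client_id": "my-client-id",  # hypothetical client
    },
    timeout=10,
)
tokens = resp.json()  # new access_token; with rotation, also a new refresh_token
```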
Registration endpoint?
+
Dynamic client registration.
Relationship between oauth2 and oidc?
+
OIDC extends OAuth2 by adding identity features.
Relaystate?
+
Parameter passed between SP and IdP that preserves context, such as the return URL.
Relying party trust?
+
Configuration for apps that rely on ADFS for authentication.
Replay attack?
+
Reusing captured tokens.
'Replay detected'?
+
Assertion already used before.
Reply/acs url?
+
Endpoint where Azure AD posts SAML responses.
Resource owner password grant (ropc)?
+
User sends username/password directly; insecure and deprecated.
Resource owner?
+
The user or entity owning the protected resource.
Resource server responsibility?
+
Validate tokens and expose APIs.
Resource server?
+
The API hosting the protected resources.
Response_mode?
+
Defines how tokens are returned (query, form_post, fragment).
Response_type?
+
Defines which tokens are returned (code, id_token, token).
Restrict redirect_uri?
+
Prevents token leakage to malicious URLs.
Risk-based authentication?
+
Adaptive authentication based on context.
Risk-based sso?
+
Challenges based on user risk profile.
Ropc flow?
+
Resource Owner Password Credentials — now discouraged.
Ropc used?
+
Legacy or highly trusted systems; not recommended.
Rotate certificates periodically?
+
Prevents long-term compromises.
Rotate secrets regularly?
+
Client secrets should be rotated periodically.
Rp-initiated logout?
+
Client logs the user out at IdP.
Rpo and rto?
+
RPO (Recovery Point Objective): maximum data loss allowed; RTO (Recovery Time Objective): maximum downtime allowed during recovery.
Saml 2.0?
+
A standard for exchanging authentication and authorization data using XML-based security assertions.
Saml attribute query?
+
SP querying user attributes via SOAP.
Saml authentication flow?
+
SP sends AuthnRequest → IdP authenticates → IdP sends assertion → SP validates → user logged in.
Saml binding?
+
Defines how SAML messages are transported over HTTP.
Saml federation?
+
Enables authentication across organizations, with trust established through exchanged SAML metadata.
Saml flow is more secure?
+
SP-initiated SSO due to request ID matching.
Saml in sso?
+
XML-based single sign-on protocol used in enterprises.
Saml is not good for mobile?
+
XML processing is heavy and not designed for mobile flows.
Saml logoutrequest?
+
Request to initiate logout across IdP and SP.
Saml metadata?
+
XML document describing IdP and SP configuration.
Saml profile?
+
Defines use cases like Web SSO, SLO, IdP proxying.
Saml response?
+
XML message from the IdP containing the SAML assertion about the user's identity.
Saml single logout (slo)?
+
Logout from one system logs the user out of all SAML-connected systems.
Saml still used?
+
Strong enterprise adoption and compatibility with legacy systems.
Saml strength?
+
Federated SSO, enterprise security.
Saml weakness?
+
Complexity, XML overhead, slower than OIDC.
Saml?
+
Security Assertion Markup Language (SAML) is an XML-based standard for exchanging authentication and authorization data between an identity provider and service provider.
Scim provisioning in okta?
+
Automatic user account creation/deletion in apps.
Scim provisioning?
+
Automatic provisioning of users to cloud apps.
Scim?
+
Protocol for automated user provisioning to SSO and cloud apps.
Scope restriction?
+
Limit token permissions to least privilege.
Scope?
+
Defines the level of access requested by the client.
Seamless sso?
+
Automatically signs in users on corporate devices.
Secrets management?
+
Securely stores and manages API keys, passwords, and certificates used by cloud apps and containers.
Security automation with devsecops?
+
Integrating security in CI/CD pipelines to automate scanning, testing, and policy enforcement during development.
Security context?
+
Session stored after validating assertion.
Security group vs network acl?
+
Security group is stateful; network ACL is stateless and applies at subnet level.
Security group?
+
Security groups act as virtual firewalls controlling inbound and outbound traffic to cloud resources such as VMs and containers.
Security information and event management (siem)?
+
SIEM collects and analyzes security events and logs in real time to detect, alert on, and respond to threats across cloud environments.
Separate auth and resource servers?
+
Improves security and scales better.
Serverless security?
+
Securing functions-as-a-service (FaaS) and managed backend services through identity policies, least-privilege access, and monitoring of event triggers.
Service provider (sp)?
+
SP is the application that relies on IdP for authentication and trusts the IdP’s tokens or assertions.
Service provider (sp)?
+
An application that consumes SAML assertions and grants access.
Session endpoint?
+
Endpoint for session management.
Session federation?
+
Sharing session state across domains.
Session hijacking?
+
Stealing a valid session to impersonate a user.
Session in sso?
+
Stored authentication state allowing continuous access.
Session token vs id token?
+
Session = internal system token; ID token = external identity token.
Session_state?
+
Identifier for user session at IdP.
Shared responsibility model in aws?
+
AWS secures the cloud infrastructure; customers secure their data, applications, and configurations.
Shared responsibility model in azure?
+
Azure secures physical data centers; customers manage applications, data, and identity.
Shared responsibility model in gcp?
+
GCP secures the infrastructure; customers secure workloads, data, and user access.
Shared responsibility model?
+
It defines which security responsibilities belong to the cloud provider and which to the customer.
Should assertions be encrypted?
+
Yes, especially for sensitive data.
Should tokens be short-lived?
+
Reduces impact of compromise.
Signature validation?
+
Checks if signed by trusted IdP.
Signature verification fails?
+
Wrong certificate or XML manipulation.
Silent authentication?
+
Refreshes tokens without user interaction.
Single federation?
+
Using one IdP across multiple apps.
Single logout?
+
Logout from one app logs out from all federated apps.
Single sign-on (sso)?
+
SSO enables users to log in once and access multiple cloud applications without re-authentication.
Sla in cloud?
+
Service Level Agreement defines uptime guarantees, availability, and performance metrics with providers.
Slo is more reliable?
+
Back-channel — avoids browser failures.
Slo may fail?
+
SPs may ignore logout request or session mismatch.
Slo unreliable?
+
Different SP implementations and browser constraints.
Slo?
+
Single Logout — logs user out from all apps.
Sni support in adfs?
+
Allows multiple SSL certs on same host.
Soap binding?
+
Used for back-channel communication like logout.
Sp adapter?
+
Adapter to authenticate SP requests.
Sp federation?
+
One SP trusts multiple IdPs.
Sp in sso?
+
Service Provider — application consuming the identity.
Sp metadata url?
+
URL where IdP fetches SP metadata.
Sp?
+
Application that uses IdP authentication.
Sp-initiated sso?
+
Login initiated from Service Provider.
Ssl/tls in cloud?
+
SSL/TLS encrypts data in transit, ensuring secure communication between clients and cloud services.
Sso connector?
+
Pre-integrated SSO configuration for apps.
Sso improves identity governance?
+
Yes, ensures consistent user lifecycle management.
Sso in saml?
+
Single Sign-On enabling users to access multiple apps with one login.
Sso needed?
+
It improves user experience and security by eliminating repeated logins.
Sso provider?
+
A platform offering authentication and federation services.
Sso setup complex?
+
Requires certificates, metadata, mappings, and trust configuration.
Sso url?
+
Identity Provider endpoint that handles authentication requests.
Sso with adfs?
+
Supports SAML and WS-Fed for on-prem identity.
Sso with azure ad?
+
Uses SAML, OIDC, OAuth, and Conditional Access.
Sso with okta?
+
Supports SAML, OIDC, SCIM, and rich policy controls.
Sso with pingfederate?
+
Enterprise SSO with SAML, OAuth, and adaptive auth.
Sso?
+
SSO allows users to log in once and access multiple applications without re-entering credentials. It improves UX and security.
State parameter?
+
Protects against CSRF attacks.
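A small standard-library sketch of generating and checking the state value; the same pattern applies to the OIDC nonce:

```python
import secrets

# Before redirecting to the authorization endpoint:
state = secrets.token_urlsafe(32)   # binds the callback to this browser session
nonce = secrets.token_urlsafe(32)   # echoed back inside the ID token
# Persist both server-side (or in a secure, HttpOnly cookie).

# On the callback:
def check_state(returned_state: str, stored_state: str) -> None:
    if not secrets.compare_digest(returned_state, stored_state):
        raise ValueError("state mismatch - possible CSRF")
```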
Step-up authentication?
+
Requesting stronger authentication mid-session.
Sts?
+
Security Token Service issuing tokens.
'sub' claim?
+
Subject — unique identifier of the user.
Subjectconfirmationdata?
+
Contains conditions like recipient and expiration.
Surface controllers?
+
Surface controllers handle form submissions and page interactions in MVC views for Umbraco sites.
Tenant in azure ad?
+
A dedicated Azure AD instance for an organization.
Test slo compatibility?
+
Different SPs/IdPs implement SLO inconsistently.
Tls required for oidc?
+
Prevents token interception.
To check adfs logs?
+
Use Event Viewer under ADFS Admin logs.
To export metadata?
+
Access /FederationMetadata/2007-06/FederationMetadata.xml.
To extend umbraco functionality?
+
Use custom controllers, property editors, surface controllers, or packages.
To handle jwt expiration?
+
Use short-lived access tokens and refresh tokens to renew them without re-authentication.
To implement role-based authorization with jwt?
+
Include roles in JWT claims and validate in the application to allow/deny access to resources.
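A hedged PyJWT sketch of that pattern; the roles claim name and the shared secret are illustrative assumptions, not a fixed standard:

```python
import jwt

def require_role(token: str, role: str, secret: str) -> dict:
    # Signature and expiry are verified first; then the roles claim is checked
    claims = jwt.decode(token, secret, algorithms=["HS256"])
    if role not in claims.get("roles", []):
        raise PermissionError(f"missing required role: {role}")
    return claims
```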
To implement sso with umbraco?
+
Integrate with SAML/OIDC provider; configure Umbraco to trust the IdP, enabling centralized authentication.
To integrate oauth with umbraco?
+
Use OAuth packages or middleware to enable login with third-party providers. Tokens are verified in the Umbraco back-office.
To integrate oauth/jwt in angular or react with umbraco backend?
+
Frontend requests token via OAuth flow; backend validates JWT before serving content or API data.
To prevent replay attacks?
+
Use nonces, timestamps, one-time-use assertions, audience restrictions, session validation, and PoP tokens or PKCE.
To prevent session hijacking?
+
Use secure cookies, TLS, and short sessions.
To prevent token hijacking?
+
Use HTTPS, short-lived tokens, PKCE, and secure storage.
To refresh jwt tokens?
+
Use refresh tokens to request a new access token without re-authentication. Implement server-side validation for security.
To revoke jwt tokens?
+
Maintain a blacklist or short-lived tokens; revoke by invalidating refresh tokens.
To secure microservices with jwt?
+
Each microservice validates the token signature, expiry, and claims, ensuring stateless and secure access.
To secure umbraco back-office?
+
Enable HTTPS, enforce strong passwords, MFA, and assign roles/permissions to users.
To store access tokens?
+
Secure storage: keychain, secure enclave, or encrypted storage.
To update token-signing certificates?
+
Auto-rollover or manual certificate update.
Token accesses apis?
+
Access Token.
Token binding?
+
Binds tokens to the client's TLS keys so a stolen token cannot be reused elsewhere.
Token chaining?
+
Passing tokens between multiple services.
Token decryption certificate?
+
Certificate used to decrypt incoming tokens.
Token encryption?
+
Encrypts token contents for confidentiality.
Token endpoint?
+
Used to exchange authorization code for tokens.
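A hedged requests-based sketch of that exchange for the authorization code grant; the URL, credentials, and captured code are placeholders:

```python
import requests

TOKEN_URL = "https://idp.example.com/oauth2/token"  # hypothetical endpoint
auth_code = "CODE_FROM_REDIRECT"                    # placeholder
code_verifier = "PKCE_VERIFIER_FROM_SESSION"        # placeholder

resp = requests.post(
    TOKEN_URL,
    data={
        "grant_type": "authorization_code",
        "code": auth_code,
        "redirect_uri": "https://app.example.com/callback",
        "client_id": "my-client-id",
        "code_verifier": code_verifier,  # PKCE, required for public clients
    },
    timeout=10,
)
tokens = resp.json()  # typically access_token, id_token, refresh_token, expires_in
```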
Token exchange?
+
Exchanging one token for another with different scopes or audience under OAuth2/OIDC.
Token expiration?
+
Tokens expire after a predefined time to limit misuse.
Token formats does okta issue?
+
JWT-based ID, access, refresh tokens.
Token hashing?
+
Hash values (e.g., at_hash, c_hash) embedded in the ID Token to confirm the integrity of the access token or authorization code.
Token hijacking?
+
Stealing tokens to impersonate users.
Token introspection?
+
Endpoint used to check the validity of opaque OAuth access or refresh tokens.
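A hedged sketch of calling an RFC 7662-style introspection endpoint with requests; the URL and client credentials are placeholders:

```python
import requests

INTROSPECT_URL = "https://idp.example.com/oauth2/introspect"  # hypothetical
opaque_token = "TOKEN_TO_CHECK"                               # placeholder

resp = requests.post(
    INTROSPECT_URL,
    data={"token": opaque_token},
    auth=("my-client-id", "my-client-secret"),  # client authentication
    timeout=10,
)
info = resp.json()
if info.get("active"):  # inactive/expired tokens return {"active": false}
    print(info.get("sub"), info.get("scope"), info.get("exp"))
```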
Token lifetime policy?
+
Rules controlling validity of issued tokens.
Token proves authentication?
+
ID Token.
Token renewal?
+
Extending session without login.
Token replay attack?
+
Attacker reuses a captured token or assertion to impersonate a user.
Token revocation?
+
Invalidating an access or refresh token before it expires, via the revocation endpoint.
Token scope?
+
Permissions embedded in the token.
Token signing certificate?
+
Certificate used to sign SAML assertions.
Token signing key?
+
Key used to sign JWT tokens.
Token signing?
+
Cryptographically signing tokens to prevent tampering.
Token types adfs issues?
+
SAML tokens, JWT tokens in OAuth/OIDC.
Token types does azure ad issue?
+
Access token, ID token, Refresh token.
Tokenization?
+
Tokenization replaces sensitive data with unique identifiers (tokens) to reduce exposure.
Transient nameid?
+
Short-lived identifier used once per session.
Transport does saml commonly use?
+
HTTP Redirect, HTTP POST, HTTP Artifact.
Trust establishment?
+
Exchange of metadata and certificates.
Types of grants?
+
Authorization Code, Client Credentials, Password Credentials, Refresh Token, and Implicit (deprecated).
Types of groups exist?
+
Directory groups, imported groups, application groups.
Types of oidc clients?
+
Public and confidential clients.
Types of pingfederate connections?
+
SP connections, IdP connections.
Types of saml assertions?
+
Authentication, Authorization Decision, Attribute.
Types of slo?
+
Front-channel and back-channel.
Types of sso does azure ad support?
+
SAML, OIDC, OAuth, Password-based SSO.
Types of sso does okta support?
+
SAML, OIDC, password vaulting.
Umbraco content service?
+
Content Service API allows CRUD operations on content nodes programmatically.
Unsolicited response?
+
IdP-initiated response not tied to AuthnRequest.
Url of oidc discovery?
+
/.well-known/openid-configuration.
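For illustration, fetching the discovery document from a hypothetical issuer with requests:

```python
import requests

issuer = "https://idp.example.com"  # hypothetical issuer
config = requests.get(
    issuer + "/.well-known/openid-configuration", timeout=10
).json()
# The document lists the provider's endpoints and capabilities
print(config["authorization_endpoint"], config["token_endpoint"], config["jwks_uri"])
```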
Use artifact binding?
+
More secure, avoids sending assertion through browser.
Use https always?
+
Yes, required for OAuth to avoid token leakage.
Use https everywhere?
+
Required for secure SAML transmission.
Use https in sso?
+
Protects token transport.
Use ip restrictions?
+
Adds another protection layer.
Use long-lived refresh tokens?
+
Only with rotation and revocation.
Use oidc over saml?
+
For mobile, SPAs, APIs, and modern cloud systems.
Use pkce for public clients?
+
Always.
Use rate limiting?
+
Avoid abuse of authorization endpoints.
Use refresh token rotation?
+
Prevents stolen refresh tokens from being reused.
Use saml over oidc?
+
For enterprise SSO with legacy systems.
Use secure token storage?
+
Use OS-protected key stores.
Use short assertion lifetimes?
+
Mitigates replay risk.
Use short-lived access tokens?
+
Recommended for security and performance.
Use transient nameid?
+
Enhances privacy by avoiding long-term IDs.
Userinfo endpoint?
+
Returns user profile attributes.
Userinfo signature?
+
Signed UserInfo responses for extra security.
Validate audience restrictions?
+
Ensures assertion is meant for the SP.
Validate audience?
+
Ensures token is intended for the client.
Validate expiration?
+
Prevents using expired tokens.
Validate issuer and audience?
+
Must be validated on every API call.
Validate issuer?
+
Ensures token is from trusted identity provider.
Validate redirect uris?
+
Required to prevent redirects to malicious sites.
Validate timestamps?
+
Prevents replay attacks.
Virtual private cloud (vpc)?
+
A VPC isolates cloud resources in a private network, controlling routing, subnets, and security policies.
Wap pre-authentication?
+
Validates user before forwarding to backend server.
X.509 certificate used for in saml?
+
To sign and encrypt assertions.
Xml encryption?
+
Encrypts assertion contents for confidentiality.
Xml signature?
+
Cryptographic signing of SAML assertions.
You configure claim rules?
+
Using rule templates or custom claims transformation.
You configure sp-initiated sso?
+
Enable SAML integration with proper ACS and Entity ID.
You deploy pingfederate?
+
On-prem VM, container, or cloud VM.
Zero downtime deployment?
+
Deploying updates without interrupting service by blue-green or rolling deployment strategies.
Zero trust security?
+
Zero trust assumes no implicit trust; every user, device, and request must be verified regardless of origin or location.

Azure DevOps (Azure Pipelines)

Agent pool?
+
A collection of machines where pipeline jobs are executed.
Artifacts in Azure DevOps?
+
Build outputs stored for deployment, sharing, or consumption in releases.
Azure DevOps?
+
A cloud-based DevOps platform with boards, repos, pipelines, artifacts, and test plans.
Azure Pipelines?
+
CI/CD service in Azure DevOps for building, testing, and deploying applications.
DiffBet Classic and YAML pipelines?
+
Classic uses a visual editor; YAML pipelines are code-based and versioned in the repo.
Handle secrets in Azure Pipelines?
+
Use Azure Key Vault integration or pipeline variables marked as secret.
Release pipeline?
+
Defines deployment to multiple environments with approvals, gates, and artifact consumption.
Schedule pipelines in Azure DevOps?
+
Use triggers like scheduled pipelines with CRON expressions.
Stages in Azure Pipelines?
+
Logical phases like Build, Test, and Deploy, which contain jobs and tasks.
Task in Azure Pipeline?
+
Predefined operations like build, deploy, test, or script execution within a job.

Azure DevOps

Azure Artifacts?
+
A repository for packages like NuGet, npm, or Maven, enabling sharing and versioning of artifacts in DevOps pipelines.
Azure Boards?
+
Azure Boards provide work item tracking, Kanban boards, sprints, and backlog management for Agile project planning.
Azure DevOps?
+
Azure DevOps is a Microsoft platform for CI/CD, project management, source control, and testing pipelines. Supports Boards, Repos, Pipelines, Artifacts, and Test Plans.
Azure Pipelines?
+
Azure Pipelines enable CI/CD automation for building, testing, and deploying applications across multiple environments.
Azure Repos?
+
Azure Repos provides Git or TFVC repositories for source control and versioning.
DiffBet Azure DevOps Services and Server?
+
Services is cloud-hosted (SaaS), Server is on-premise. Services updates automatically; Server requires manual upgrades.
DiffBet build and release pipelines?
+
Build pipeline compiles code, runs tests, and produces artifacts. Release pipeline deploys artifacts to environments.
Implement CI/CD in Azure DevOps?
+
Push code → build pipeline triggers → run tests → publish artifacts → release pipeline deploys to target environments.
Manage permissions in Azure DevOps?
+
Use security groups, role-based access, and project-level permissions to control access to boards, repos, and pipelines.
YAML in Azure Pipelines?
+
YAML defines pipeline stages, jobs, and tasks in a text file that can be versioned with source control.

Azure Functions

Azure Functions?
+
Serverless compute service to run event-driven code. Charges based on execution time and resources.
Cold start in Azure Functions?
+
Delay when a function is triggered after idle. Mitigated using Premium Plan or Always On.
DiffBet Function App and Function?
+
Function App is the container for multiple functions sharing runtime and configuration. Functions are individual tasks.
Durable Function?
+
Extension to Functions for stateful, orchestrated workflows over long-running processes.
Hosting plans for Azure Functions?
+
Consumption Plan (serverless), Premium Plan (pre-warmed instances), Dedicated (App Service Plan).
Input and output binding in Functions?
+
Bindings simplify connecting functions to external services (storage, queues, DBs) without explicit code.
Languages are supported in Azure Functions?
+
C#, JavaScript, Python, Java, PowerShell, TypeScript, and custom handlers.
Monitor Azure Functions?
+
Use Application Insights to track execution, failures, performance, and logs.
Secure Azure Functions?
+
Use API keys, OAuth, managed identities, or Azure AD integration.
Triggers Azure Functions?
+
HTTP requests, timers, Blob storage changes, Service Bus, Event Hubs, and Cosmos DB triggers.
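For illustration, a minimal HTTP-triggered function in the Python v1 programming model (the trigger and bindings live in the function's function.json, omitted here):

```python
import azure.functions as func

def main(req: func.HttpRequest) -> func.HttpResponse:
    # Reads an optional query parameter and responds; in the v1 model the
    # HTTP trigger binding is declared in function.json, not in code.
    name = req.params.get("name", "world")
    return func.HttpResponse(f"Hello, {name}!", status_code=200)
```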

Azure Key Vault

Access Key Vault from code?
+
Use Azure SDK, REST API, or managed identity for authentication.
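A hedged sketch using the azure-identity and azure-keyvault-secrets packages; the vault URL and secret name are placeholders:

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

client = SecretClient(
    vault_url="https://my-vault.vault.azure.net",  # hypothetical vault
    credential=DefaultAzureCredential(),  # uses managed identity when available
)
secret = client.get_secret("db-password")  # hypothetical secret name
print(secret.value)
```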
Azure Key Vault?
+
Cloud service to securely store secrets, keys, and certificates. Helps centralize and manage sensitive information.
Backup and restore Key Vault?
+
Azure provides APIs and PowerShell commands to backup keys, secrets, and certificates and restore in another vault.
Control access to Key Vault?
+
Use Access Policies or Azure RBAC to assign read/write permissions to users or services.
DiffBet Function App Plan types?
+
Consumption: serverless, auto-scale, pay per execution. Premium: pre-warmed instances, VNET support. Dedicated: fixed resources, always on.
DiffBet secrets and keys?
+
Secrets store sensitive info (passwords), keys are for cryptographic operations (encryption, signing).
DiffBet soft-delete and purge protection?
+
Soft-delete allows recovery of deleted objects. Purge protection prevents permanent deletion until explicitly disabled.
DiffBet Standard and Premium tiers in Service Bus?
+
Premium provides dedicated resources, higher throughput, low latency, and advanced features like sessions and transactions.
DiffBet topics and queues in Service Bus?
+
Queues are one-to-one messaging; topics allow one-to-many messaging via subscriptions.
Ensure message ordering in Service Bus?
+
Use message sessions or partitioned queues to maintain FIFO processing.
Integrate Service Bus with Azure Functions?
+
Use Service Bus trigger in Functions to automatically execute code when a message arrives in queue/topic.
Key Vault improves security in cloud applications?
+
Centralized secrets management, reduces hardcoding credentials, integrates with managed identities, and ensures compliance.
Managed Identity with Key Vault?
+
Enables secure access from Azure resources without storing credentials in code.
Monitor Key Vault access?
+
Enable diagnostic logs to Azure Monitor or Event Hub for auditing access and usage.
Multiple functions share a Key Vault?
+
Yes, multiple Function Apps can access the same Key Vault via managed identities.
Objects can be stored in Key Vault?
+
Secrets (passwords), Keys (encryption), Certificates (SSL/TLS).
Purpose of Key Vault in DevOps pipelines?
+
Securely inject secrets, certificates, and keys into CI/CD pipelines without exposing credentials.
Rotate secrets in Key Vault?
+
Use automatic or manual rotation to periodically update keys/secrets without downtime.
Scale Azure App Service?
+
Scale up (bigger instance) or scale out (more instances). Autoscale can respond to CPU/memory metrics.
soft-delete in Key Vault?
+
Allows recovery of deleted secrets/keys for a retention period (default 90 days).

Azure Repos

Pull Requests in Azure Repos?
+
They enable code review and enforce branch policies before merging code into protected branches.
Azure Repos?
+
Azure Repos is part of Azure DevOps providing Git repositories and TFVC (Team Foundation Version Control) for collaborative development.
Branch policy in Azure Repos?
+
Policies enforce code quality, mandatory reviews, builds, and checks before merging into protected branches.
Branching strategy?
+
Defines rules for feature, release, hotfix, and main branches to ensure clean development workflow (e.g., GitFlow, trunk-based).
Create a repo in Azure Repos?
+
Azure DevOps → Repos → New repository → Git or TFVC → Initialize with README → Create.
DiffBet Azure Repos and GitHub?
+
Azure Repos integrates tightly with Azure DevOps pipelines and boards, while GitHub is more widely used for public repos and community collaboration.
DiffBet Git and TFVC in Azure Repos?
+
Git is distributed VCS; TFVC is centralized. Git supports branching/merging; TFVC uses workspace checkouts.
DiffBet GitHub, GitLab, Bitbucket, Azure Repos?
+
All host Git repos. GitHub focuses on public collaboration, GitLab on DevOps lifecycle, Bitbucket integrates with Jira, Azure Repos integrates with Azure DevOps ecosystem.
Enforce branch policies?
+
Use required reviewers, build validations, and limit who can merge.
Handle merge conflicts in multi-developer environment?
+
Use feature branches, PRs/MRs, communicate changes, and resolve conflicts manually when they arise.
Integrate Azure Repos with CI/CD?
+
Connect with Azure Pipelines to automatically build, test, and deploy on push or PR events.
Integrate Azure Repos with pipelines?
+
Link repo to Azure Pipelines and trigger CI/CD pipelines on push or PR events.
Manage secrets in CI/CD pipelines?
+
Use GitHub secrets, GitLab CI variables, Bitbucket secured variables, or Azure Key Vault.
Monitor repository activity?
+
Use webhooks, built-in analytics, CI/CD logs, audit logs, or integration tools like SonarQube for code quality monitoring.
Rollback a PR in Azure Repos?
+
Revert the merged PR using the revert button or manually revert commits.

Azure Service Bus

Azure Service Bus?
+
A messaging platform for asynchronous communication between services using queues and topics.
Dead-letter queues (DLQ)?
+
Sub-queues to store messages that cannot be delivered or processed. Helps error handling and retries.
DiffBet Service Bus and Storage Queue?
+
Service Bus supports advanced messaging features (pub/sub, sessions, DLQ), Storage Queue is simpler and cost-effective.
Duplicate detection?
+
Service Bus can detect and ignore duplicate messages based on MessageId within a defined time window.
Enable auto-forwarding?
+
Forward messages from one queue/subscription to another automatically for workflow chaining.
Message lock duration?
+
Time a message is locked for processing. Prevents multiple consumers from processing simultaneously.
Message session in Service Bus?
+
Used to group related messages for ordered processing by the same consumer.
Peek-lock?
+
Locks the message while reading but does not delete it until explicitly completed.
Queue in Service Bus?
+
FIFO message storage where one consumer reads messages at a time.
Topic and Subscription?
+
Topics allow multiple subscribers to receive copies of a message. Useful for pub/sub patterns.
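A hedged sketch of queue send/receive with the azure-servicebus package; the connection string and queue name are placeholders:

```python
from azure.servicebus import ServiceBusClient, ServiceBusMessage

CONN = "<service-bus-connection-string>"  # placeholder
QUEUE = "orders"                          # hypothetical queue name

with ServiceBusClient.from_connection_string(CONN) as client:
    with client.get_queue_sender(queue_name=QUEUE) as sender:
        sender.send_messages(ServiceBusMessage("hello"))
    with client.get_queue_receiver(queue_name=QUEUE, max_wait_time=5) as receiver:
        for msg in receiver:                 # peek-lock by default
            print(str(msg))
            receiver.complete_message(msg)   # removes the message from the queue
```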

Bitbucket

Bitbucket api?
+
Bitbucket API allows programmatic access to repositories, pipelines, pull requests, and other resources.
Bitbucket app password?
+
App password allows authentication for API or Git operations without using your main password.
Bitbucket artifacts in pipelines?
+
Artifacts are files produced by steps that can be used in later steps or downloads.
Bitbucket branch model?
+
Branch model defines naming conventions and workflow for feature, release, and hotfix branches.
Bitbucket branch permission?
+
Branch permission restricts who can push, merge, or delete on specific branches.
Bitbucket build status?
+
Build status shows pipeline or CI/CD success/failure associated with commits or pull requests.
Bitbucket caches in pipelines?
+
Caches store dependencies between builds to speed up pipeline execution.
Bitbucket cloud?
+
Bitbucket Cloud is a SaaS version hosted by Atlassian, accessible via a web browser without local server setup.
Bitbucket code insights?
+
Code Insights provides annotations, reports, and automated feedback in pull requests.
Bitbucket code review?
+
Code review is the process of inspecting code changes before merging.
Bitbucket code search?
+
Code search allows searching for keywords across repositories and branches.
Bitbucket commit hook?
+
Commit hook triggers scripts on commit events to enforce rules or automation.
Bitbucket commit?
+
A commit is a snapshot of changes in the repository with a unique identifier.
Bitbucket compare feature?
+
Compare shows differences between branches commits or tags.
Bitbucket custom pipeline?
+
Custom pipeline is manually triggered or triggered by specific branches, tags, or events.
Bitbucket default branch?
+
Default branch is the primary branch where new changes are merged usually main or master.
Bitbucket default pipeline?
+
Default pipeline is automatically triggered for all branches unless overridden.
Bitbucket default reviewers?
+
Default reviewers are users automatically added to pull requests for code review.
Bitbucket deployment environment?
+
Deployment environment represents a target system like development, staging, or production.
Bitbucket deployment permissions?
+
Deployment permissions control who can deploy to specific environments.
Bitbucket deployment tracking?
+
Deployment tracking shows which commit was deployed to which environment.
Bitbucket emoji reactions?
+
Emoji reactions allow quick feedback on pull request comments.
Bitbucket environment variables?
+
Environment variables store configuration values used in pipelines.
Bitbucket forking workflow?
+
Forking workflow involves creating a fork, making changes, and submitting a pull request to the original repository.
Bitbucket inline discussions?
+
Inline discussions allow commenting on specific lines in pull requests.
Bitbucket integration with jira?
+
Integration links commits, branches, and pull requests to Jira issues for traceability.
Bitbucket issue tracker integration?
+
Integration links repository commits, branches, or pull requests to issues for tracking.
Bitbucket issue tracker?
+
Issue tracker helps manage tasks, bugs, and feature requests within a repository.
Bitbucket merge check requiring successful build?
+
This ensures pipelines pass before a pull request can be merged.
Bitbucket merge check?
+
Merge check ensures conditions like passing pipelines, approvals, or no conflicts before merging.
Bitbucket merge conflict?
+
Merge conflict occurs when changes in different branches conflict and cannot be merged automatically.
Bitbucket merge permissions?
+
Merge permissions restrict who can merge pull requests into a branch.
Bitbucket merge strategy?
+
Merge strategy determines how branches are combined: merge commit, squash, or fast-forward.
Bitbucket pipeline caching?
+
Caching stores files like dependencies between builds to improve speed.
Bitbucket pipeline step?
+
Step defines an individual task in a pipeline such as build test or deploy.
Bitbucket pipeline trigger?
+
Trigger defines events that start a pipeline, like push, pull request, or schedule.
Bitbucket pipeline?
+
Bitbucket Pipelines is the CI/CD service integrated with Bitbucket; it automates build, test, and deployment processes defined in a bitbucket-pipelines.yml file.
Bitbucket post-receive hook?
+
Post-receive hook runs after push to notify or trigger workflows.
Bitbucket pre-receive hook?
+
Pre-receive hook runs on the server before accepting pushed changes.
Bitbucket pull request approvals?
+
Approvals are confirmations from reviewers before merging pull requests.
Bitbucket pull request comment?
+
Comment allows discussion or feedback on code changes in pull requests.
Bitbucket pull request inline comment?
+
Inline comment is attached to a specific line in a file within a pull request.
Bitbucket pull request merge button?
+
Merge button merges the pull request once all conditions are met.
Bitbucket pull request merge conflicts?
+
Merge conflicts occur when changes in branches are incompatible.
Bitbucket pull request merge strategies?
+
Merge strategies: merge commit, squash, or fast-forward.
Bitbucket pull request tasks?
+
Tasks are action items within pull requests for reviewers or authors to complete.
Bitbucket release management?
+
Release management tracks versions, tags, and deployment history.
Bitbucket repository fork vs clone?
+
Fork creates remote copy for independent development; clone copies repository locally.
Bitbucket repository forking limit?
+
Cloud repositories can have unlimited forks; limits may apply in Server based on configuration.
Bitbucket repository hook?
+
Repository hook is a script triggered by repository events like commits or pull requests.
Bitbucket repository mirroring?
+
Repository mirroring synchronizes changes between two repositories.
Bitbucket repository permissions inheritance?
+
Permissions can be inherited from project-level to repository-level for consistent access.
Bitbucket repository size limit?
+
Bitbucket Cloud repository limit is 2 GB for free plan; Server can be configured based on hardware.
Bitbucket repository watchers vs default reviewers?
+
Watchers receive notifications; default reviewers are added to pull requests automatically.
Bitbucket repository watchers?
+
Watchers receive notifications about repository activity.
Bitbucket repository?
+
A repository is a storage space on Bitbucket where your project’s code, history, and collaboration features are managed.
Bitbucket rest api?
+
REST API allows programmatic access to Bitbucket resources for automation and integrations.
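For illustration, a hedged sketch listing a workspace's repositories via the Bitbucket Cloud 2.0 REST API using requests; the workspace, username, and app password are placeholders:

```python
import requests

WORKSPACE = "my-workspace"  # hypothetical workspace slug

resp = requests.get(
    f"https://api.bitbucket.org/2.0/repositories/{WORKSPACE}",
    auth=("my-username", "my-app-password"),  # app password, not the account password
    timeout=10,
)
for repo in resp.json().get("values", []):  # responses are paginated under "values"
    print(repo["full_name"])
```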
Bitbucket server (data center)?
+
Bitbucket Server is a self-hosted solution for enterprises to manage Git repositories internally.
Bitbucket smart mirroring?
+
Smart mirroring improves clone and fetch speed by using geographically closer mirrors.
Bitbucket snippet permissions?
+
Snippet permissions control who can view or edit code snippets.
Bitbucket snippet?
+
Snippet is a way to share small pieces of code or text with others independent of repositories.
Bitbucket ssh key?
+
SSH key is used for secure authentication between local machine and repository.
Bitbucket tag?
+
Tag marks a specific commit in the repository often used for releases.
Bitbucket tags vs branches?
+
Tags mark specific points; branches are active development lines.
Bitbucket user groups?
+
User groups allow managing access permissions for multiple users collectively.
Bitbucket workspace?
+
Workspace is a container for repositories, users, and projects in Bitbucket Cloud.
Bitbucket?
+
Bitbucket is a web-based Git repository hosting service by Atlassian, providing source code management and collaboration tools such as pull requests, branch permissions, pipelines, and Jira integration. (Mercurial support was removed from Bitbucket Cloud in 2020.)
Branch in bitbucket?
+
A branch is a parallel version of a repository used to develop features, fix bugs, or experiment without affecting the main codebase.
Diffbet bitbucket and github?
+
Bitbucket offers free private repositories, integrates tightly with Atlassian tools, and historically supported Mercurial as well as Git; GitHub focuses on Git and public repositories with a strong open-source community.
Diffbet bitbucket cloud and server pipelines?
+
Cloud pipelines are hosted in Bitbucket’s environment; Server pipelines are run on self-hosted infrastructure.
Diffbet bitbucket pull request approval and merge check?
+
Approval indicates reviewers’ consent; merge check enforces rules before allowing a merge.
Diffbet bitbucket rest api and webhooks?
+
REST API is used for querying and managing resources; webhooks push event notifications to external systems.
Diffbet branch permissions and user permissions in bitbucket?
+
Branch permissions restrict actions on specific branches; user permissions control overall repository access.
Diffbet commit and push in bitbucket?
+
Commit saves changes locally; push uploads commits to remote repository.
Diffbet environment and branch in bitbucket?
+
Branch is a code version; environment is a deployment target.
Diffbet fork and clone in bitbucket?
+
Fork creates a separate remote repository; clone copies a repository to your local machine.
Diffbet git and bitbucket?
+
Git is a version control system, while Bitbucket is a hosting service for Git repositories with collaboration features like PRs, pipelines, and access controls.
Diffbet git and mercurial in bitbucket?
+
Both are distributed version control systems; Git is more widely used and flexible, while Mercurial is simpler with easier workflows.
Diffbet git clone and bitbucket clone?
+
Git clone is a Git command for local copies; Bitbucket clone often refers to cloning repositories hosted on Bitbucket.
Diffbet https and ssh in bitbucket?
+
HTTPS requires username/password or app password; SSH uses public-private key pairs.
Diffbet lightweight and annotated tags in bitbucket?
+
Lightweight tag is just a pointer; annotated tag includes metadata like author, date, and message.
Diffbet manual and automatic merging in bitbucket?
+
Manual merging requires user action; automatic merging merges once all checks and approvals pass.
Diffbet manual and automatic triggers in bitbucket?
+
Manual triggers require user action; automatic triggers run based on configured events.
Diffbet master and main in bitbucket?
+
Main is the modern default branch name; master is the legacy default branch name.
Diffbet merge and pull request?
+
Merge is the action of combining code; pull request is the workflow for review and discussion before merging.
Diffbet merge checks and branch permissions?
+
Merge checks enforce conditions for pull requests; branch permissions restrict direct actions on branches.
Diffbet mirror and fork in bitbucket?
+
Mirror replicates a repository; fork creates an independent copy for development.
Diffbet pipeline step and pipeline?
+
Pipeline is a sequence of steps; step is a single unit within the pipeline.
Diffbet project and repository in bitbucket?
+
Project groups multiple repositories; repository stores the actual code and history.
Diffbet read, write and admin access in bitbucket?
+
Read allows viewing code; write allows pushing changes; admin allows full control including settings and permissions.
Diffbet rebase and merge in bitbucket?
+
Rebase applies commits on top of base branch for linear history; merge combines branches preserving commit history.
Diffbet repository and project permissions in bitbucket?
+
Repository permissions control access to a specific repository; project permissions control access to all repositories under a project.
Fast-forward merge in bitbucket?
+
Fast-forward merge moves the branch pointer forward when there are no divergent commits.
Fork in bitbucket?
+
A fork is a copy of a repository in your account to make changes independently before submitting a pull request.
Merge in bitbucket?
+
Merge combines changes from one branch into another typically after code review.
Pull request in bitbucket?
+
Pull request is a mechanism to propose code changes from one branch to another with review and approval workflow.
Pull requests in bitbucket?
+
A pull request (PR) lets developers propose code changes for review before merging into main branches. It ensures code quality and collaboration.
Squash merge in bitbucket?
+
Squash merge combines multiple commits into a single commit before merging into the target branch.
To create a repository in bitbucket?
+
Login → Click Create repository → Provide name, description, access type → Initialize with README (optional) → Create.
To resolve merge conflicts in bitbucket cloud?
+
Fetch the branch, resolve conflicts locally, commit, and push to the pull request branch.
Webhook in bitbucket?
+
Webhook allows Bitbucket to send event notifications to external systems or services automatically.
Yaml in bitbucket pipelines?
+
YAML file defines pipeline configuration including steps, triggers, and deployment environments.
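The pipeline file itself is YAML; to keep examples in one language, this hedged sketch embeds a minimal, hypothetical bitbucket-pipelines.yml and inspects it with PyYAML (pip install pyyaml):

```python
import yaml

PIPELINE = """
image: python:3.12
pipelines:
  default:
    - step:
        name: Build and test
        caches:
          - pip
        script:
          - pip install -r requirements.txt
          - pytest
"""

config = yaml.safe_load(PIPELINE)
# The default pipeline runs for every branch unless overridden
print(config["pipelines"]["default"][0]["step"]["script"])
```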
You resolve merge conflicts in bitbucket?
+
Resolve conflicts locally in Git, commit the changes, and push to the branch.

CI/CD

A/b testing in ci/cd?
+
A/B testing compares two versions of an application to evaluate performance or user engagement.
Ansible in ci/cd?
+
Ansible automates configuration management, provisioning, and application deployment.
Artifact promotion?
+
Artifact promotion moves build artifacts from development or staging to production environments.
Artifact repository?
+
Central storage for build outputs such as binaries, Docker images, libraries, and packages; examples include Nexus and Artifactory.
Automated deployment in ci/cd?
+
Automated deployment delivers application changes to environments without manual intervention.
Automated testing in ci/cd?
+
Automated testing runs tests automatically to validate code functionality and quality during CI/CD pipelines.
Azure devops pipelines?
+
Azure DevOps Pipelines automates builds, tests, and deployments in the Azure DevOps environment.
Benefits of ci/cd?
+
Faster delivery, improved code quality, early bug detection, reduced integration issues, and automated workflows.
Bitbucket pipelines?
+
Bitbucket Pipelines is a CI/CD service integrated with Bitbucket repositories for automated builds, tests, and deployments.
Blue-green deployment?
+
Deploy the new version to a second, identical environment and switch traffic to it once validated, minimizing downtime.
Build artifact?
+
Build artifact is the output of a build process such as compiled binaries Docker images or packages.
Build in ci/cd?
+
A build compiles source code into executable artifacts often including dependency resolution and packaging.
Build matrix?
+
Build matrix runs pipeline jobs across multiple environments configurations or versions.
Build pipeline stage?
+
A stage in a pipeline represents a major step such as build test or deploy.
Build trigger?
+
Build trigger automatically starts a pipeline based on events like commit merge request or schedule.
Canary deployment?
+
Release the new version to a small subset of users and monitor behavior and stability before full rollout.
Canary monitoring?
+
Canary monitoring observes new releases for errors or performance issues before full rollout.
Chef in ci/cd?
+
Chef is an automation tool for managing infrastructure and deployments.
Ci/cd best practice?
+
Best practices include version control, automated tests, code review, fast feedback, secure secrets, and monitoring.
Ci/cd metrics?
+
CI/CD metrics track build duration, success rate, deployment frequency, mean time to recovery, and failure rate.
Ci/cd pipeline?
+
A CI/CD pipeline is an automated sequence of stages that code goes through from commit to deployment.
Ci/cd security?
+
CI/CD security ensures secure code, pipeline configuration, secrets management, and deployment.
Ci/cd?
+
CI/CD stands for Continuous Integration and Continuous Deployment/Delivery: CI automatically builds and tests code on each commit; CD automatically deploys it to staging or production.
Circleci?
+
CircleCI is a cloud-based CI/CD platform that automates build, test, and deployment workflows.
Code quality analysis in ci/cd?
+
Code quality analysis checks code for bugs, vulnerabilities, style, and maintainability using tools like SonarQube.
Configuration file in ci/cd?
+
Configuration file defines the pipeline steps, environment variables, triggers, and deployment settings.
Containerization in ci/cd?
+
Containerization packages software and dependencies into a portable container, often using Docker.
Continuous delivery (cd)?
+
CD is the practice of automatically preparing code changes for release to production.
Continuous deployment?
+
Continuous Deployment automatically deploys code changes to production after passing tests without manual intervention.
Continuous integration (ci)?
+
CI is the practice of frequently integrating code changes into a shared repository with automated builds and tests.
Continuous monitoring in ci/cd?
+
Continuous monitoring tracks application performance, errors, and metrics post-deployment.
Dependency management in ci/cd?
+
Dependency management ensures required libraries, packages, and modules are available during builds and deployments.
Deployment frequency?
+
Deployment frequency measures how often software changes are deployed to production.
Deployment pipeline?
+
A deployment pipeline automates the process of delivering software to different environments, such as dev, test, and production.
Devops?
+
DevOps is a culture and set of practices combining development and operations to deliver software faster and more reliably.
Difference between a pipeline and a workflow?
+
A pipeline is a sequence of automated steps; a workflow also includes branching, approvals, and manual triggers in CI/CD.
Difference between CI and CD?
+
CI (Continuous Integration) merges code frequently and builds and tests it automatically; CD (Continuous Delivery/Deployment) deploys tested code to environments automatically.
Difference between CI and nightly builds?
+
CI triggers builds on each commit; nightly builds run at scheduled times, typically once per day.
Difference between CI/CD and DevOps?
+
CI/CD is a subset of DevOps practices focused on automation; DevOps also includes culture, collaboration, and infrastructure practices.
Difference between continuous delivery and continuous deployment?
+
Continuous Delivery requires manual approval for deployment; Continuous Deployment is fully automated to production.
Difference between declarative and scripted Jenkins pipelines?
+
Declarative pipelines use a structured, readable syntax; scripted pipelines use Groovy scripts with more flexibility.
Docker in ci/cd?
+
Docker is a platform to build, ship, and run applications in containers.
Dynamic code analysis in ci/cd?
+
Dynamic code analysis inspects running code to detect runtime errors or performance issues.
Feature branching in ci/cd?
+
Feature branching involves developing new features in isolated branches to prevent conflicts in the main branch.
Fork vs clone?
+
Fork is a copy on the server; clone is a local copy of a repo. Fork enables collaboration via PRs.
Gitlab ci file?
+
The .gitlab-ci.yml file defines GitLab CI/CD pipeline stages, jobs, and configurations.
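A minimal illustrative .gitlab-ci.yml, assuming a make-based project and a hypothetical deploy.sh script:

```yaml
stages:
  - build
  - test
  - deploy

build-job:
  stage: build
  script:
    - make build              # compile the application

test-job:
  stage: test
  script:
    - make test               # run the automated test suite

deploy-job:
  stage: deploy
  script:
    - ./deploy.sh staging     # hypothetical deployment script
  rules:
    - if: $CI_COMMIT_BRANCH == "main"   # deploy only from main
```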
Gitlab ci/cd pipeline?
+
Pipeline defines jobs, stages, and scripts to automate build, test, and deploy.
Gitlab ci/cd?
+
GitLab CI/CD is a tool integrated with GitLab for automating builds, tests, and deployments.
Gitlab runner?
+
A GitLab runner executes CI/CD jobs defined in GitLab pipelines.
Immutable infrastructure in ci/cd?
+
Immutable infrastructure involves replacing servers or environments rather than modifying them.
Infrastructure as code (iac)?
+
IaC automates infrastructure provisioning through code, using tools such as Terraform or Ansible.
Integration test?
+
Integration test checks the interaction between multiple components or systems.
How is CI/CD implemented in Azure Repos?
+
Using Azure Pipelines linked to repos, automatically triggering builds and deployments.
How is CI/CD implemented in Bitbucket?
+
Using Bitbucket Pipelines defined in bitbucket-pipelines.yml.
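For example, a minimal bitbucket-pipelines.yml sketch, assuming a Node.js project:

```yaml
image: node:20               # container image used for all steps

pipelines:
  default:                   # runs on every push
    - step:
        name: Build and test
        caches:
          - node             # reuse downloaded dependencies between runs
        script:
          - npm ci
          - npm test
```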
How is CI/CD implemented in GitHub?
+
Using GitHub Actions defined in .yml workflows, triggered on push, PR, or schedule.
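A minimal illustrative workflow file (e.g. .github/workflows/ci.yml; the build commands are placeholders):

```yaml
name: CI
on:
  push:
    branches: [main]
  pull_request:

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make build && make test   # placeholder build/test commands
```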
How is CI/CD implemented in GitLab?
+
Using .gitlab-ci.yml and GitLab Runners to automate builds, tests, and deployments.
Jenkins job?
+
A Jenkins job defines tasks such as build test or deploy within a CI/CD pipeline.
Jenkins pipeline?
+
A Jenkins pipeline is a set of instructions defining the stages and steps for automated build, test, and deployment.
Jenkins?
+
Jenkins is an open-source automation server used for building, testing, and deploying software in CI/CD pipelines.
Jenkinsfile?
+
A Jenkinsfile defines a Jenkins pipeline as code, specifying stages, steps, and agents.
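A minimal declarative Jenkinsfile sketch; the shell commands and deploy.sh script are assumed placeholders:

```groovy
pipeline {
    agent any                       // run on any available agent
    stages {
        stage('Build') {
            steps { sh 'make build' }
        }
        stage('Test') {
            steps { sh 'make test' }
        }
        stage('Deploy') {
            when { branch 'main' }  // deploy only from the main branch
            steps { sh './deploy.sh production' }
        }
    }
}
```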
Key components of ci/cd pipeline?
+
Source code management, build automation, automated testing, artifact management, deployment automation, and monitoring.
Kubernetes in ci/cd?
+
Kubernetes is a container orchestration platform used to deploy, scale, and manage containers in CI/CD pipelines.
Lead time for changes?
+
Lead time measures the duration from code commit to deployment in production.
Manual trigger?
+
Manual trigger requires user action to start a pipeline or deploy a release.
Mean time to recovery (mttr)?
+
MTTR measures the average time to recover from failures in deployment or production.
Pipeline approval?
+
Pipeline approval requires manual authorization before proceeding to deployment stages.
Pipeline artifact vs build artifact?
+
Pipeline artifacts are shared between jobs/stages; build artifacts are outputs of a single build.
Pipeline artifact?
+
A pipeline artifact is an output from a job or stage, such as binaries or reports, used in later stages.
Pipeline as code?
+
Pipeline as code defines CI/CD pipelines in versioned files (YAML, Jenkinsfile), enabling change tracking, standardized workflows, and automation.
Pipeline caching?
+
Pipeline caching stores dependencies or artifacts to speed up build times.
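As a sketch, caching npm's download directory between runs in GitLab CI (paths and commands assume a Node.js project):

```yaml
test-job:
  stage: test
  cache:
    key: "$CI_COMMIT_REF_SLUG"    # one cache per branch
    paths:
      - .npm/                     # npm's download cache
  script:
    - npm ci --cache .npm --prefer-offline
    - npm test
```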
Pipeline concurrency?
+
Pipeline concurrency allows multiple pipelines or jobs to run simultaneously.
Pipeline drift?
+
Pipeline drift occurs when pipelines are inconsistent across environments or teams.
Pipeline environment variable?
+
Environment variable stores configuration values used by pipeline jobs.
Pipeline failure?
+
A pipeline failure occurs when a job or stage fails due to code errors, test failures, or configuration issues.
Pipeline job?
+
A job is a specific task executed in a pipeline stage like running tests or building artifacts.
Pipeline notifications?
+
Pipeline notifications alert teams about build or deployment status via email, Slack, or other channels.
Pipeline observability?
+
Pipeline observability monitors pipeline performance, failures, and bottlenecks.
Pipeline optimization?
+
Pipeline optimization improves the speed, reliability, and efficiency of CI/CD processes.
Pipeline retry?
+
Pipeline retry reruns failed jobs automatically or manually.
Pipeline scheduling?
+
Pipeline scheduling triggers builds or deployments at specified times.
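For example, a scheduled trigger in GitHub Actions; the cron expression (nightly at 02:00 UTC) is illustrative:

```yaml
on:
  schedule:
    - cron: '0 2 * * *'   # minute hour day-of-month month day-of-week
```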
Pipeline visualization?
+
Pipeline visualization shows the flow of stages, jobs, and results graphically.
Post-deployment testing?
+
Post-deployment testing validates functionality, performance, and monitoring after deployment.
Pre-deployment testing?
+
Pre-deployment testing validates changes in staging or test environments before production deployment.
Production environment?
+
Production environment is where the live application runs and is accessible to end users.
Puppet in ci/cd?
+
Puppet automates infrastructure configuration management and compliance.
Regression test?
+
Regression test ensures that new changes do not break existing functionality.
Release in ci/cd?
+
Release is a version of the software ready to be deployed to production or other environments.
Role of a build server in ci/cd?
+
A build server automates compiling, testing, and packaging code changes for integration and deployment.
Role of automation in ci/cd?
+
Automation reduces manual intervention, improves consistency, speeds up delivery, and ensures quality.
Rollback automation?
+
Rollback automation automatically reverts deployments when failures are detected.
Rollback in ci/cd?
+
Rollback is reverting a deployment to a previous stable version in case of issues.
Rollback strategy in ci/cd?
+
Rollback strategy defines procedures to revert deployments safely in case of failures.
Rollback testing?
+
Rollback testing validates the rollback process and ensures previous versions work correctly.
Rolling deployment?
+
Rolling deployment gradually replaces old versions with new ones across servers or pods, reducing downtime and risk.
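As an illustration, the rolling-update settings on a Kubernetes Deployment (a manifest fragment; the values are examples):

```yaml
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # at most one extra pod during the update
      maxUnavailable: 0    # never drop below the desired replica count
```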
Secrets management in ci/cd?
+
Secrets management securely stores sensitive information like passwords, API keys, or certificates.
Shift-left testing in ci/cd?
+
Shift-left testing moves testing earlier in the development lifecycle to catch defects sooner.
Smoke test?
+
Smoke test is a preliminary test to check basic functionality before detailed testing.
Sonarqube in ci/cd?
+
SonarQube analyzes code quality, technical debt, and vulnerabilities, integrating into CI/CD pipelines.
Staging environment?
+
Staging environment mimics production to test releases before deployment.
Static code analysis in ci/cd?
+
Static code analysis inspects code without execution to find errors, security issues, or style violations.
System test?
+
System test validates the complete and integrated software system against requirements.
Terraform in ci/cd?
+
Terraform is an IaC tool used to define provision and manage infrastructure declaratively.
Test environment?
+
Test environment is a setup where testing is performed to validate software functionality and quality.
How to handle secrets in CI/CD?
+
Use encrypted variables, secret management tools, or vault integration to store credentials securely.
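For instance, a GitHub Actions step reading an encrypted repository secret; MY_API_KEY and deploy.sh are hypothetical names:

```yaml
steps:
  - name: Deploy
    run: ./deploy.sh                      # script reads API_KEY from its environment
    env:
      API_KEY: ${{ secrets.MY_API_KEY }}  # injected at runtime, never committed
```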
How to protect branches in GitHub?
+
Use branch protection rules: require PR reviews, status checks, and restrict who can push.
How to roll back commits in Git?
+
Use git revert (creates a new commit) or git reset (rewinds history) depending on requirement.
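Concretely, a minimal sketch (the commit references are examples):

```bash
# Safe on shared branches: create a new commit that undoes the last commit.
git revert HEAD

# Rewrites history: move the branch pointer back two commits and discard them.
# Avoid on branches others have already pulled.
git reset --hard HEAD~2
```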
Travis ci?
+
Travis CI is a hosted CI/CD service for building and testing software projects hosted on GitHub.
Trunk-based development?
+
Trunk-based development involves frequent commits to the main branch with short-lived feature branches.
Unit test?
+
Unit test verifies the functionality of individual components or functions in isolation.
Vault in ci/cd?
+
Vault is a tool for securely storing and managing secrets and sensitive data.
Version control in ci/cd?
+
Version control is the management of code changes using tools like Git or SVN for tracking and collaboration.
Version control integration?
+
CI/CD tools integrate with Git, SVN, or Mercurial to detect code changes and trigger pipelines.
Webhooks in github/bitbucket/gitlab?
+
Webhooks trigger external services when events occur, like push, PR, or merge events, enabling CI/CD and integrations.
Yaml in ci/cd?
+
YAML is a human-readable format used to define CI/CD pipeline configurations.
How do you monitor CI/CD pipelines?
+
Using dashboards, logs, notifications, or metrics for build health and performance.

Clean Architecture

+
Clean architecture?
+
A design pattern where dependencies flow inward, separating core business logic from frameworks, UI, and infrastructure.
Dependency rule?
+
Dependencies should always point inward toward high-level policies, not toward external frameworks or infrastructure.
Difference between entity and DTO?
+
An entity represents domain data with behavior; a DTO is a simple data carrier between layers or services.
Difference between layered and clean architecture?
+
Layered architecture is strictly horizontal; Clean Architecture emphasizes dependency inversion and decouples business logic from external concerns.
How does clean architecture support testing?
+
Business rules are isolated from UI and DB, allowing unit tests without mocking infrastructure.
How does it handle frameworks?
+
Frameworks are plug-ins; the core domain does not depend on frameworks, enabling easy replacement.
Example: using clean architecture in C#
+
Domain → core entities, Application → services/use cases, Infrastructure → DB, API → controllers.
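A minimal C# sketch of that layering, with dependencies pointing inward; all type names are hypothetical:

```csharp
// Domain: core entity with behavior, no external dependencies.
public class Order
{
    public decimal Total { get; private set; }
    public void AddItem(decimal price) => Total += price;
}

// Application: a use case that depends only on an abstraction.
public interface IOrderRepository
{
    void Save(Order order);
}

public class PlaceOrderUseCase
{
    private readonly IOrderRepository _repository;
    public PlaceOrderUseCase(IOrderRepository repository) => _repository = repository;
    public void Execute(Order order) => _repository.Save(order);
}

// Infrastructure: implements the abstraction, so the dependency points inward.
public class SqlOrderRepository : IOrderRepository
{
    public void Save(Order order) { /* persistence code would go here */ }
}
```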
Key layers?
+
Entities (core business), Use Cases (application logic), Interface Adapters (controllers/gateways), and Frameworks/Drivers (DB, UI).
Use case interactor?
+
An application service that orchestrates business rules for a specific use case.
Why use clean architecture?
+
It improves testability, maintainability, decoupling, and allows technology changes without impacting core logic.

Code Reviews

+
Asynchronous code review?
+
Reviewing code at different times rather than in real-time meetings.
Asynchronous vs synchronous review?
+
Asynchronous: reviews done at different times; synchronous: live review sessions.
Automated code review?
+
Automated code review uses tools to check code for syntax, style, security, performance, and standards compliance; examples include SonarQube, ESLint, and CodeClimate.
Benefits of code reviews?
+
Benefits include higher quality, early defect detection, knowledge sharing, team alignment, and maintainability.
Branching strategy review?
+
Ensuring code merges follow the team's branching strategy, e.g. GitFlow or trunk-based.
Code complexity review?
+
Reviewing cyclomatic complexity and identifying overly complex code that is hard to maintain.
Code duplication review?
+
Checking for repeated code that should be refactored into reusable functions or modules.
Code ownership?
+
Code ownership defines responsibility for maintaining and improving specific modules or components.
Code refactoring?
+
Refactoring improves code structure and readability without changing its external behavior.
Code review acceptance criteria?
+
Clear conditions that must be met for code to pass the review.
Code review bottleneck?
+
Delays in merging code due to slow or insufficient reviews.
Code review checklist benefit?
+
Checklists ensure consistency, reduce missed issues, and improve review quality.
Code review etiquette for authors?
+
Be open to feedback, respond professionally, clarify questions, and make required changes.
Code review etiquette for reviewers?
+
Be constructive, specific, and respectful, and focus on the code, not the author.
Code review for open-source projects?
+
Community-driven reviews to ensure quality, maintainability, and adherence to contribution guidelines.
Code review frequency?
+
The frequency at which code changes are submitted and reviewed, ideally continuously or per feature.
Code review governance?
+
Policies and guidelines governing how code reviews are performed in an organization.
Code review kpi?
+
Metrics to measure the effectiveness of code reviews, e.g. defects found, review time, or team participation.
Code review workflow?
+
The workflow defines how code is submitted, reviewed, approved, and merged.
Code review?
+
Code review is the systematic examination of source code by peers to identify defects, improve quality, and ensure adherence to standards.
How do code reviews help junior developers?
+
They learn best practices, design patterns, debugging techniques, and company coding standards from experienced developers.
Why are code reviews important?
+
They improve code quality, reduce bugs, enhance maintainability, facilitate knowledge sharing, and encourage consistency across the codebase.
Code smell?
+
A code smell is a symptom of poor code quality, like duplicated code, long methods, or complex logic.
Code style review?
+
Checking adherence to naming conventions, indentation, spacing, and formatting standards.
Coding standard?
+
Coding standards are agreed-upon rules for code style, formatting, and best practices.
Commit message review?
+
Reviewing that commit messages are descriptive, meaningful, and follow guidelines.
Common code review best practices?
+
Check for readability, maintainability, performance, security, adherence to coding standards, and proper documentation.
Common code review checklist?
+
The checklist includes readability, naming conventions, design patterns, security, performance, and error handling.
Constructive feedback in code reviews?
+
Feedback that is specific, actionable, and focused on improving the code rather than criticizing the author.
Continuous code review?
+
Continuous review integrates code review into the CI/CD pipeline to catch issues as code is committed.
Continuous improvement in code reviews?
+
Iteratively improving review processes, checklists, and team practices.
Continuous learning from code reviews?
+
The team learns from defects, patterns, and best practices highlighted in reviews.
Cross-team code review?
+
Review conducted by members from other teams for knowledge sharing and better quality.
Cultural aspect of code reviews?
+
Fostering a culture of collaboration, learning, and constructive feedback.
Defensive coding review?
+
Review focusing on preventing errors, handling edge cases, and improving robustness.
Dependency review?
+
Checking external libraries or modules for compatibility, security, and versioning.
Design review vs code review?
+
Design review checks architecture and design decisions; code review focuses on implementation details.
Difference between code review and code inspection?
+
Inspection is formal with documented findings; review can be informal or tool-assisted.
Difference between code review and testing?
+
Code review finds defects in logic, style, and design; testing validates code behavior at runtime.
Difference between code walkthrough and code review?
+
A walkthrough is an informal, guided explanation of code; a review is a systematic evaluation for defects.
Difference between CR and QA?
+
Code review is done by developers for quality and maintainability; QA tests for functional correctness.
Difference between formal and informal code reviews?
+
Formal reviews are structured with checklists and documentation. Informal reviews are lightweight, often over pull requests or pair programming.
Difference between major and minor code review comments?
+
Major comments indicate critical issues affecting functionality or maintainability; minor comments are suggestions or style improvements.
Documentation review?
+
Ensuring code is well-commented and documentation accurately reflects functionality.
Dynamic code analysis?
+
Dynamic analysis evaluates code behavior during execution to identify runtime issues.
Error handling review?
+
Ensuring proper exception handling, logging, and graceful failure in code.
Formal code review?
+
Formal code review follows a structured process with defined roles, meetings, and checklists.
Incremental code review?
+
Reviewing small code changes frequently instead of large chunks at once.
Informal code review?
+
Informal review is a casual inspection of code without formal meetings or documentation.
Integration test review?
+
Ensuring integration tests verify interactions between modules and external systems.
Knowledge sharing in code reviews?
+
Code reviews help spread understanding of the codebase, best practices, and design patterns among team members.
Linting?
+
Linting is the automated checking of code for stylistic errors, bugs, or anti-patterns.
Logging review?
+
Reviewing that logs are meaningful, not excessive, and do not leak sensitive data.
Main types of code reviews?
+
Types include formal reviews, informal reviews, pair programming, and tool-assisted reviews.
Maintainability in code review?
+
Ensuring code is easy to read, understand, extend, and debug by other developers.
Mentor-driven review?
+
Experienced developers provide guidance and suggestions to less experienced team members.
What metrics can you track in code reviews?
+
Number of issues found, time spent per review, code coverage, and review participation rates.
Metrics for reviewer performance?
+
Metrics include the number of reviews done, quality of feedback, and response time.
Modularity in code review?
+
Code should be organized into reusable, independent modules for easier maintenance.
How often should code reviews be conducted?
+
Ideally for every feature branch or pull request before merging into the main branch to catch issues early.
Onboarding through code reviews?
+
New developers learn coding standards, practices, and codebase structure via reviews.
Over-reviewing?
+
Spending excessive time on minor issues, reducing efficiency or demotivating the author.
Pair programming?
+
Two developers work together on the same code; one writes code while the other reviews in real-time.
Peer accountability in code reviews?
+
Ensuring all team members participate and contribute responsibly to reviews.
Peer code review?
+
A peer code review is when developers review each other’s code to ensure it meets quality and design standards.
Peer feedback in code review?
+
Feedback provided by peers to improve code quality and knowledge sharing.
Pair programming vs code review?
+
Pair programming involves simultaneous coding and reviewing; code review happens after code is written.
Peer review?
+
Peer review is a process where colleagues examine each other’s code for quality and correctness.
Performance code review?
+
Review emphasizing efficient algorithms, memory usage, and scalability.
Post-mortem code review?
+
Review conducted after a production issue to understand root cause and prevent recurrence.
Pull request (pr)?
+
A PR is a request to merge code changes into a repository, often reviewed by peers before approval.
Pull request size best practice?
+
Keep PRs small and focused to facilitate faster and more effective reviews.
Readability in code review?
+
Readable code is clear, consistent, well-named, and easily understandable.
Re-review?
+
Reviewing updated code after initial review comments have been addressed.
Resolved comment?
+
A resolved comment is a review comment that has been addressed by the author.
Review approval?
+
Formal acceptance that code meets standards and is ready to merge.
Review automation benefit?
+
Automation speeds up checks, enforces standards, and reduces human errors.
Review backlog?
+
A queue of pending code reviews awaiting reviewer attention.
Review comment categorization?
+
Classifying comments as major, minor, suggestion, or question for prioritization.
Review comment?
+
A review comment is feedback provided by a reviewer to improve code quality.
Review coverage?
+
Percentage of code changes that undergo review before merging.
Review etiquette for large teams?
+
Clear responsibilities, good communication, and avoiding conflicting feedback.
Review etiquette?
+
Etiquette includes being respectful, constructive, and specific, and avoiding personal criticism.
Review feedback loop?
+
The process of submitting, reviewing, addressing comments, and re-reviewing until approval.
Review for legacy code?
+
Reviewing existing code to identify improvements, refactoring needs, and risks.
Review for refactoring?
+
Review ensuring that refactored code improves structure and readability without introducing bugs.
Review in ci/cd pipeline?
+
Code review integrated into CI/CD to prevent defective code from being merged.
Review metrics analysis?
+
Analyzing review metrics to improve quality, efficiency, and team collaboration.
Review turnaround time?
+
The time taken for a reviewer to provide feedback on submitted code.
Reviewer rotation?
+
Rotating reviewers to spread knowledge and avoid bias in reviews.
Risks of poor code reviews?
+
Risks include bugs, security vulnerabilities, inconsistent style, technical debt, and slower development.
Role of a reviewer?
+
The reviewer evaluates code quality, suggests improvements, ensures standards are followed, and identifies defects.
Role of an author?
+
The author writes the code, addresses review comments, and ensures changes meet quality standards.
Root cause analysis in code review?
+
Understanding why defects occur to prevent similar issues in the future.
Scalability review?
+
Reviewing code to ensure it can handle increasing workload or number of users effectively.
Security code review?
+
Review focusing on identifying security vulnerabilities such as SQL injection, XSS, or authentication flaws.
Security review?
+
Review specifically for vulnerabilities, sensitive data exposure, and compliance issues.
Self-review?
+
Author reviews their own code before submitting it for peer review.
What should you check in a code review?
+
Check correctness, readability, maintainability, security, performance, and adherence to coding standards.
Some popular code review tools?
+
Tools include GitHub Pull Requests, GitLab Merge Requests, Bitbucket, Crucible, Review Board, and Phabricator.
Static code analysis?
+
Static analysis uses automated tools to analyze code without executing it, detecting errors and enforcing standards.
Technical debt identification in code review?
+
Identifying suboptimal code or shortcuts that may require future refactoring.
Test coverage review?
+
Ensuring code has adequate automated test coverage for all critical paths.
Testability review?
+
Ensuring code is easy to test with unit, integration, or automated tests.
How to give constructive feedback in code reviews?
+
Focus on code, not the developer, explain why changes are needed, suggest improvements, and be respectful and encouraging.
How to handle conflicts during code review?
+
Discuss objectively with examples, refer to coding standards, involve a neutral reviewer if necessary, and focus on project goals.
Tool-assisted code review?
+
Using software tools (like GitHub, GitLab, or Crucible) to comment on, track, and approve code changes.
What tools are used for code reviews?
+
Popular tools include GitHub Pull Requests, GitLab Merge Requests, Azure DevOps, Crucible, and Bitbucket.
Under-reviewing?
+
Skipping important checks or approving low-quality code without proper examination.
Unit test review?
+
Ensuring that automated unit tests exist, are comprehensive, and correctly test functionality.
How do you balance speed and quality in code reviews?
+
Focus on critical issues first, use automated tools for repetitive checks, and avoid overloading reviewers to maintain efficiency.
How do you ensure code review consistency across a team?
+
Establish coding standards, use review checklists, and train team members on the review process.
How do you handle a large code change in a review?
+
Break it into smaller logical chunks, review incrementally, and prioritize high-risk areas first.

Creatio CRM

+
‘actions’ in a creatio workflow?
+
Tasks executed automatically: sending email, assigning owner, updating records, creating tasks, notifications, etc.
‘conditions and rules’ in a creatio workflow?
+
Logical criteria used inside workflows to branch paths: e.g. if amount > X then route to manager; else proceed to next step.
360‑degree customer view in creatio?
+
A unified profile that stores contact info, interaction history, orders/cases/contracts — giving full visibility across departments.
Advantages of using workflow automation vs manual processes?
+
Consistency, reduced errors, speed, auditability, scalability, and freeing up human resources for strategic work.
Ai‑native crm in creatio?
+
AI is embedded at the core: predictive, generative, and agentic AI features (lead scoring, automated actions, email generation, insights) to support CRM tasks.
Ai‑powered lead scoring in creatio?
+
AI analyzes lead data/history to assign scores to leads — helping sales/marketing prioritize high‑potential leads automatically.
Api integrations in creatio crm?
+
REST / API endpoints provided by Creatio to integrate with external systems (ERP, e‑commerce platform, telephony, webhooks, etc.).
Api rate‑limiting and performance considerations for integrations in creatio?
+
When using APIs for integrations, pay attention to request rates, data volume, and trigger load to avoid performance issues.
Approach for a business continuity plan involving creatio crm (downtime, disaster)?
+
Have backups, redundancy, plan for failover, offline data access if supported, data export strategy, manual process fallback.
Approach for auditing user permissions and data access regularly in creatio?
+
Run audit logs, review roles, validate access levels, revoke unused permissions, enforce least privilege principle.
Approach for customizing creatio ui for brand/organization requirements?
+
Configure layouts, themes, labels, custom fields, modules, and optionally custom code/extensions if needed.
Approach for gdpr / data‑privacy compliance with creatio in eu or regulated regions?
+
Implement consent fields, data access controls, data retention / purge policies, audit logs, role‑based permissions.
Approach for handling data migration during major schema changes in creatio?
+
Export existing data, map to new schema, transform as needed, import to new model, validate data integrity, test workflows.
Approach for integrating creatio with e‑commerce or web‑forms (lead capture)?
+
Use APIs/webhooks to push form data to Creatio, auto-create leads or contacts, trigger workflows for follow-up or assignment.
Approach for long‑term scalability and maintainability of custom apps built on creatio?
+
Document schema and workflows, follow naming and versioning standards, modular design, regular review and cleanup.
Approach for migration from another crm to creatio without losing history and data relationships?
+
Extract full data including history, map entities/relationships, import in correct order (e.g. accounts before opportunities), maintain IDs or references, test thoroughly.
Approach for multi‑department collaboration using creatio across sales, service, marketing?
+
Define shared workflows, permissions, data model; ensure proper assignment and notifications; use unified customer profile.
Approach for testing performance under load with many concurrent workflows/users in creatio?
+
Simulate load, monitor response times, optimize workflows, scale resources, archive old data, avoid heavy triggers.
Approach for user feedback and continuous improvement after rollout of creatio?
+
Collect user feedback, analyze issues, refine workflows/UI, conduct periodic training, and update documentation.
Approach to ensure data integrity when multiple integrations write to creatio?
+
Implement validation rules, transaction checks, error handling, deduplication logic and monitoring to prevent data corruption.
Approach to incremental rollout of creatio to large organization?
+
Pilot with small user group, gather feedback, refine workflows, train next group, gradually expand — reduce risk and ensure adoption.
Approach to integrate creatio with external analytics tool (bi)?
+
Use APIs to export CRM data or connect BI tool to database; schedule regular exports; maintain data integrity and mapping.
Approach to retire or archive old/unused data or workflows in creatio?
+
Identify deprecated records/processes, archive or delete, update workflows to avoid referencing removed data, backup before cleaning.
Audit & compliance readiness when using creatio for regulated industries (finance, healthcare)?
+
Use access controls, audit logs, encryption, data retention/archival policies, strict permissions and workflow approvals.
Audit compliance (e.g. gdpr, iso) support in creatio?
+
Use audit logs, permissions, role‑based access, data retention policies, secure integrations to comply with regulatory requirements.
Audit log for user actions in creatio?
+
Records user activities — login, data modifications, workflow executions — useful for security, compliance, and tracking.
Audit logging frequency and storage management when many user activities logged in creatio?
+
Define retention policies, purge or archive older logs, store securely — avoid excessive storage while maintaining compliance.
Audit trail / history tracking in creatio?
+
Record changes to data — who changed what and when — useful for compliance, tracking updates, accountability.
Backup and disaster recovery planning with creatio crm?
+
Regular backups, off‑site storage, redundancy, version control to ensure data safety in case of failures or data corruption.
Benefit of crm + bpm (business process management) combined, as with creatio, compared to standard crm?
+
Allows not only managing customer data but automating operational, internal and industry‑specific business processes — increases efficiency and flexibility.
Benefit of modular licensing model for growing businesses?
+
They can add modules/users as needed, scale gradually without paying for unneeded features upfront.
Benefits of low‑code crm for businesses, as offered by creatio?
+
Faster deployment, lower dependence on developers, reduced costs, and flexible adaptation to changing business needs.
Best practice for naming conventions (entities, fields, workflows) in creatio customisation?
+
Use meaningful names, consistent prefixes/suffixes, document definitions — helps maintain clarity and avoid conflicts.
Best practice for testing custom workflows in creatio before production?
+
Use sandbox, test for all edge cases, verify permissions, simulate data inputs, run load tests, backup data.
Best way to manage schema changes (entities, fields) in creatio over time?
+
Define change log, version workflows, document changes, backup data, communicate to stakeholders, test in sandbox.
Bulk data import/export in creatio?
+
Supports bulk import/export operations (CSV/Excel) for contacts, leads, data migration, backups, and mass updates.
Can creatio be used beyond crm — e.g. for hr, project management, internal workflows?
+
Use its low‑code BPM / workflow engine and custom entities to model internal processes (onboarding, approvals, project tracking).
Can creatio help a service/support team improve customer resolution time?
+
By automating ticket routing, SLA enforcement, case assignment, and using AI agents to suggest responses or prioritize cases.
Can non‑technical users customise creatio crm?
+
Yes — business users (sales/marketing/service) can use visual designers to build workflows, layouts, dashboards, etc., without coding.
Can you customize ui layouts and dashboards in creatio without coding?
+
Using visual designers in Creatio’s studio — drag‑and‑drop fields, panels, dashboards; rearrange layouts as per business needs.
Can you extend creatio with custom code when no‑code tools are not enough?
+
Use provided SDK/API, write custom scripts/integrations, use REST endpoints or external services — while keeping core no‑code logic separate.
Can you implement marketing roi tracking in creatio?
+
Use campaign and lead‑to‑sale tracking, assign leads to campaigns, track conversions, revenue, attribution and generate reports/dashboards.
Change management best practice when implementing creatio?
+
Define business processes clearly, plan roles/permissions, test workflows in sandbox, migrate data carefully, train users, and roll out incrementally.
Change management when business processes evolve — to update creatio workflows?
+
Use versioning, test updated workflows in sandbox, communicate changes, train users — avoid breaking active business flows.
Changelog or release management in creatio when you update workflows?
+
Track and manage workflow changes; test in sandbox; deploy to production safely; rollback if needed.
Common challenges when implementing creatio crm?
+
Data migration complexity, initial learning curve for customisation/workflows, planning roles/permissions properly, defining business processes before building.
Common use‑cases for workflow automation in creatio?
+
Lead → opportunity process, ticket/case management, loan/credit application, onboarding workflows, approvals, order‑to‑invoice flows, etc.
Configuration vs customization in creatio?
+
Configuration = using interface/tools to set up CRM without coding; customization = writing scripts or using advanced settings where needed.
Contact and lead management in creatio?
+
It enables capturing leads/contacts, managing their data, tracking communications and statuses until conversion.
Contract/invoice/order management inside creatio?
+
Creatio allows creation/tracking of orders, generating invoices/contracts, and tracking status — integrating financial/business transactions within CRM.
Core modules available in creatio crm?
+
Sales, Marketing, Service (customer support), plus a studio/platform for custom apps & workflows.
Creatio crm?
+
Creatio CRM is a cloud‑based CRM and business‑process automation suite that unifies Sales, Marketing, Service, and workflow automation on a low‑code/no‑code, AI‑native platform.
Creatio marketplace?
+
A repository of 700+ applications/integrations/templates to extend functionality and adapt CRM to different industries or needs.
Custom app in creatio?
+
An application built on Creatio’s platform (using low‑code tools) tailored for specific business processes beyond standard CRM (e.g. HR, project management, vertical‑specific flows).
Custom entity / object in creatio?
+
Users can define new entities (tables) beyond standard CRM ones to map to business‑specific data (e.g. Projects, Vendors).
Custom field in creatio?
+
Extra field added to existing entity (contact, account, opportunity etc.) to store business‑specific data (like tax ID, region code, etc.).
Custom report building for cross‑module analytics in creatio (e.g. sales + service + marketing)?
+
Define queries combining multiple entities, set filters/aggregations, schedule reports/dashboards — useful for overall business insights.
Custom reporting vs standard reporting in creatio?
+
Standard reports are pre‑built for common needs; custom reports are built by users to meet specific data/metric requirements (fields, filters, aggregations).
Customer life‑cycle management in creatio?
+
Tracking from first contact (lead) to long-term relationship — including sales, service, upsell, renewals, support — unified under CRM.
Customer portal capability in creatio for external users?
+
Option for customers to access a portal to submit tickets, check status, view history (where supported by configuration).
Customer service (support) automation in creatio?
+
Support teams can manage tickets/cases, SLAs, communication across channels — streamlining service workflows.
Customizable workflow for onboarding new employees inside creatio (hr use‑case)?
+
Define process: create employee record → assign manager → set tasks → approvals → activation — all via CRM workflows.
Customization of workflows per geography or business unit in creatio?
+
Define different workflows per region/business unit using the flexible low‑code platform configuration.
Customization vs out‑of‑box use in creatio?
+
Out‑of‑box means using standard modules with minimal config; customization involves building custom fields, workflows, layouts or apps to tailor to specific needs.
Customizing creatio for project management instead of pure crm?
+
Use custom entities (Projects, Tasks, Milestones), relationships, workflows to manage projects and collaboration inside Creatio.
Data backup and restore in creatio?
+
Ability (or need) to backup CRM data periodically and restore if needed — ensuring data safety (depending on deployment model).
Data deduplication and duplicate detection in creatio?
+
Mechanism to detect duplicate contacts/leads, merging duplicates, and ensuring data integrity.
Data export from creatio?
+
Export contacts, leads, reports, analytics or any list to CSV/Excel to allow sharing or offline analysis.
Data import in creatio?
+
Ability to import existing data (contacts, leads, accounts) from external sources (CSV, Excel, other CRMs) into Creatio CRM.
Data privacy and gdpr / region‑compliance support in creatio?
+
Controls over personal data storage, permissions, access logs, ability to anonymize or delete personal data as per compliance needs.
Data transformation during import in creatio?
+
Mapping legacy fields to new schema, cleaning data, applying rules to convert/validate data before import — helps ensure data quality.
Describe how you’d implement the lead-to-cash process in Creatio?
+
Explain mapping of entities (Lead → Opportunity → Order → Contract/Invoice), workflows (lead scoring, assignment, approval), and integration with billing/ERP.
Difference between cloud deployment and on-premise deployment (if offered) for Creatio?
+
Cloud: easier scaling, maintenance; on-premise: more control over data, possibly required for compliance or data‑sensitive businesses.
Difference between synchronous and asynchronous tasks in workflow processing (in principle)?
+
Synchronous executes immediately; asynchronous can be scheduled/delayed or run in background — helps avoid blocking and allows scalable processing.
Difference between using Creatio for CRM only vs a full BPM + CRM use-case?
+
CRM-only: sales/marketing/service. Full BPM: includes internal operations, HR, procurement, approvals, custom workflows.
What does 'composable architecture' mean in Creatio?
+
You can mix and match modules, workflows, and custom apps as building blocks — composing CRM to business‑specific workflows without writing new code.
Does creatio help in reducing total cost of ownership compared to traditional crm systems?
+
Because of its low‑code nature and pre-built modules/integrations, businesses can avoid heavy development costs and still get a customizable CRM.
Does creatio help in regulatory compliance or audit readiness?
+
Through audit trails, role‑based access, record‑history, SLA tracking, and permissions/configuration to secure data and processes.
Does creatio support collaboration across teams?
+
Shared database, unified UI, communication and task‑assignment workflows, role‑based permissions, cross‑team visibility.
Does creatio support mobile access?
+
Yes — there is mobile access so users can manage CRM data and tasks on the go.
Does creatio support order / invoice / contract management?
+
Yes — in addition to CRM, it supports orders, invoices and contract workflows (order/contract management via CRM modules).
What does low-code / no-code mean in Creatio?
+
It means you can design workflows, applications, UI layouts and business logic via visual designers (drag‑and‑drop, configuration) instead of writing code.
Effort estimation when migrating legacy crm/data to creatio?
+
Depends on data volume, number of modules, custom workflows; small CRM migration may take days, complex might take weeks with cleaning/mapping.
Error handling and retry logic in automated workflows in creatio?
+
Define fallback steps, alerts/notifications on failure, and retries or escalations to avoid data loss or stuck workflows.
Fallback/backup workflow when primary automation fails in creatio?
+
Design error-handling steps: notifications, manual task creation, retries, logging — ensure no data/process loss.
Feature request and custom extension process for creatio when built-in features are insufficient?
+
Use Creatio’s platform to build custom fields/entities; optionally develop custom code or use external services integrated via API.
Global query in creatio (search across crm)?
+
Search across contacts, leads, accounts, cases, opportunities etc — unified search to find any record quickly across modules.
Help‑desk / ticketing workflow in creatio service module?
+
Automated case creation, assignment, SLA monitoring, escalation rules, status tracking, notifications, and case history management.
What integration capabilities does Creatio support?
+
APIs and pre-built connectors to integrate with external systems (ERP, email, telephony, third‑party tools) for seamless data flow.
Integration testing when creatio interacts with external systems (erp, e‑commerce)?
+
Test data exchange, error handling, latency, API limits, conflict resolution — in sandbox before go-live.
Integration with external systems (erp, e‑commerce, telephony) via creatio apis?
+
Use built‑in connectors or REST APIs to sync data between Creatio and external systems (orders, inventory, customer data) for unified operations.
What kind of businesses benefit most from Creatio?
+
Mid‑size to large enterprises with complex sales/service/marketing processes needing flexibility, automation, and scalability.
Knowledge base management in creatio service module?
+
Store FAQs, manuals, service guides — searchable knowledge base to help agents and customers resolve issues quickly.
Lead nurturing in creatio?
+
Automated sequence of interactions (emails, reminders, tasks) to gradually engage leads until they are sales-ready (qualified).
Lead-to-order process in creatio?
+
Flow from lead capture → qualification → opportunity → order → contract/invoice generation — all managed through CRM workflows.
License & pricing model for creatio (user‑based, module‑based)?
+
Creatio uses modular licensing — clients pay per user per module(s) — flexibility to subscribe only to needed modules.
Marketing automation in creatio?
+
Tools to run campaigns, nurture leads, segment contacts, automate email/social campaigns, measure results — all within CRM.
Marketing campaign workflow in creatio?
+
Lead segmentation → campaign initiation → email/social outreach → track responses → scoring → follow‑ups or nurture → convert to opportunity.
Monitoring & alerting setup for sla / ticketing workflows in creatio?
+
Configure alerts/notifications on SLA breach, escalation rules, dashboards for SLA compliance tracking.
Multi‑channel customer communication in creatio?
+
Support for email, phone calls, chat, social media — all interactions logged and managed centrally. :contentReference[oaicite:43]{index=43}
Multitenancy support in creatio (for agencies)?
+
Ability to manage separate organizations/business units under same instance with segregated data and permissions.
No-code agent builder in creatio?
+
A visual tool where users can assemble AI agents (with skills, workflows, knowledge bases) without writing code — enabling automation, content generation, notifications, etc.
Omnichannel communication support in creatio?
+
Handling customer interactions across multiple channels (email, phone, chat, social) unified under CRM to track history and response.
Performance monitoring / logging in creatio for workflows and system usage?
+
Track execution times, error rates, user activity, data volume — helps identify bottlenecks or abuse.
Performance optimization in creatio?
+
Use as-needed workflows, limit heavy triggers, archive old data, optimize reports, and keep dashboards lightweight for speed.
Pipeline (sales pipeline) management in creatio?
+
Visual pipeline tools that let you track deals across stages, forecast revenue, and manage opportunities from lead through closure.
Pre‑built industry‑specific workflows in creatio?
+
Templates and predefined workflows tailored to verticals (finance, telecom, services, etc.) for common business processes — reducing need to build from scratch.
Process to add a new module or functionality in creatio after initial implementation?
+
Use studio to configure module, define entities/fields/workflows, set permissions, test, and enable for users — without major downtime.
Real-time analytics vs scheduled reports in creatio?
+
Real-time analytics updates with data changes; scheduled reports are generated at intervals (daily/weekly/monthly) for review or export.
Recommended backup frequency for crm system like creatio?
+
Depends on volume and business needs — daily or weekly backups for critical data; more frequent for high‑transaction systems.
Recommended user onboarding/training plan when company moves to creatio?
+
Role‑based training, sandbox exploration, hands‑on tasks, documentation, support, phased adoption and feedback loop.
Reporting and analytics in creatio?
+
Customizable dashboards and reports to track KPIs — sales performance, marketing campaign ROI, service metrics, team performance, etc.
Role of metadata/schema management in creatio custom apps?
+
Define custom entities/tables, fields, relationships, data types — maintain schema for custom business needs without coding.
Role‑based access control (rbac) in creatio?
+
You can define roles and permissions to control which users or teams access which data/modules/features in CRM — ensuring security and proper access.
Rollback plan when automated workflows produce unintended consequences (e.g. wrong data update)?
+
Use backups, audit logs to identify changes, revert changes or re‑process via scripts or manual corrections, notify stakeholders.
Rollback strategy for a failed workflow or customization in creatio?
+
Restore from backup, revert to previous workflow version, run data correction scripts, notify users and audit changes.
Sales forecasting in creatio crm?
+
Based on pipeline data and past history, predicting future sales, revenue and chances of deal closure using built‑in analytics/AI tools.
Sandbox or test environment in creatio before production deployment?
+
A separate instance or environment where you can test workflows, customizations, and integrations before applying to live data.
Sandbox testing best practices before deploying workflows in enterprise creatio?
+
Test all branches, edge cases, user roles, data flows; verify security; backup data; get stakeholder sign-off.
Sandbox vs production environment in creatio implementation?
+
Sandbox used for testing customizations and workflows; production is live environment — helps avoid disrupting live data.
Scalability concern when many custom workflows and integrations are added to creatio?
+
Ensure optimized workflows, limit heavy triggers, archive old data, monitor performance — avoid overloading instance.
Scalability of creatio for large enterprises?
+
With cloud/no‑code + modular architecture, Creatio supports large datasets, many users, and complex workflows across departments.
Security and permissions model in creatio?
+
Role‑based permissions, access control on modules/data, record-level permissions to ensure data security and compliance.
Separation of environments (development, staging, production) in creatio deployment?
+
Maintain separate environments to develop/test customizations, test integrations, then deploy to production safely.
Sla configuration for service tickets in creatio?
+
Ability to define service‑level agreements, monitor response times/resolution deadlines, automate reminders/escalations when SLAs are near breach.
Soft delete vs hard delete of records in creatio?
+
Soft delete marks record inactive (kept for history/audit); hard delete removes record permanently (used carefully to avoid data loss).
Strategy for managing multi‑region compliance & localization when using creatio globally?
+
Use localized fields, regional data storage policies, consent management, region‑specific workflows and permissions per region.
Support and maintenance requirement after creatio deployment?
+
Monitor system performance, update workflows, backup data, manage permissions, handle upgrades and user support.
Support for gdpr / data privacy enforcement in creatio workflows?
+
Configure consent fields, access permissions, data retention policies, anonymization procedures where applicable.
Support for multiple currencies and multi‑region data in creatio?
+
Configure fields and entities to support currencies, localization, region‑specific workflows for global businesses.
Support for multiple languages in ui and data in creatio?
+
Locales and language packs — ability to configure UI labels, messages, data format for global teams/customers.
Support for role-based dashboards and views in creatio?
+
Managers, sales reps, support agents can have tailored dashboards showing data relevant to their role.
Testing strategy for new workflows or custom apps in creatio?
+
Use sandbox environment, simulate all scenarios, test edge cases, verify data integrity, run performance tests, get user sign‑off before production.
How to build a customer feedback survey workflow within Creatio?
+
Create survey entity, send survey via email/workflow after service/ticket resolution, collect responses, store data, trigger follow‑ups based on feedback.
How to design backup & disaster recovery for medium / large Creatio deployments?
+
Define backup schedule, off‑site storage, redundant servers/cloud, periodic recovery drills, documentation of restore procedures.
How to ensure performance when running large bulk data imports into Creatio?
+
Use batch imports, disable triggers if needed, split data into chunks, validate beforehand, monitor system load.
How to evaluate whether to use out-of-box features vs build custom workflows in Creatio?
+
Compare business requirements vs built-in features, consider complexity, maintenance cost, performance, ease of use before customizing.
How to handle duplicates and data quality issues during migration to Creatio?
+
Use deduplication logic, validation rules, manual review for conflicts, maintain audit logs of merges/cleanup.
How to handle a feature-request backlog and maintain a roadmap when using a low-code platform like Creatio?
+
Prioritise based on impact, maintain documentation, version workflows, schedule releases, gather user feedback, test before deployment.
How to implement audit-ready workflow logging and reporting in Creatio for compliance audits?
+
Enable audit logs, track user actions and changes, store history, provide exportable reports for compliance reviews.
How to implement a cross-department workflow (e.g. sales → service → billing) in Creatio?
+
Define entities and relationships, build multi-step workflows, set permissions per department, use shared customer data, notifications and handoffs.
How to implement lead scoring and prioritisation using Creatio built-in AI features?
+
Configure lead attributes, enable AI lead scoring, define thresholds/triggers, auto‑assign or notify sales reps for high‑value leads.
How to implement time-based or scheduled workflows (e.g. follow-ups after 30 days) in Creatio?
+
Use scheduling features or time‑based triggers to automatically perform actions after specified intervals.
How to integrate Creatio with an external analytics/BI platform for advanced reporting?
+
Use API/data export, build ETL pipelines or direct DB connections, schedule data sync, design reports as per business needs.
How to manage data privacy and user consent (for marketing) inside Creatio?
+
Add consent fields, track opt‑in/opt‑out, restrict data access, implement data retention policies, maintain audit logs.
How to manage version control and deployment of customizations across multiple environments (dev, test, prod) in Creatio?
+
Use sandbox for dev/testing, version workflows, document changes, test thoroughly, smooth promotion to production, track differences.
How to migrate CRM data and business logic from a legacy system to Creatio with minimal downtime?
+
Plan extraction, mapping, pilot import/test, validate data, run parallel systems during cut-over, communicate with users, backup data.
How to monitor and handle performance issues when many automations and workflows are active in Creatio?
+
Use logs and analytics, identify heavy workflows, optimize them, archive inactive items, scale resources, apply caching where possible.
How to prepare for a Creatio CRM implementation project?
+
Define business processes clearly, map data schema, prepare migration plan, define roles/permissions, set up sandbox, schedule training, plan rollout phases.
How to set up role-based dashboards and permission-based record visibility in Creatio?
+
Define roles, assign permissions per module/entity, configure dashboards per role to show only relevant data.
Training and onboarding support for new creatio users?
+
Use sandbox/demo environment, tutorials, documentation, role‑based permissions, and phased rollout to help adoption.
Typical migration scenario when moving to creatio from legacy crm?
+
Mapping legacy data fields to Creatio schema, cleaning data, importing contacts/leads, configuring workflows, roles, custom fields, and training users.
Typical steps for data migration into creatio from legacy systems?
+
Data extraction → cleansing → mapping to Creatio schema → import → validation → testing → go‑live.
Ui localization / multiple languages support in creatio?
+
Creatio supports multi‑language UI configuration to support global teams and clients in different regions.
Use of version history / audit trail for compliance or internal audits in creatio?
+
Track data changes, user actions, workflow executions to provide transparency, accountability and support audits.
Use‑case: building a custom internal project management tool inside creatio?
+
Define Projects, Tasks entities; set relationships; build task assignment and tracking workflows, notifications, dashboards — custom app built on low‑code platform.
Use‑case: building customer self‑service portal through creatio?
+
Expose case/ticket submission, status tracking, knowledge base, chat/email support — allowing customers to self-serve while CRM tracks interactions.
Use‑case: complaint resolution and feedback loop automation?
+
Customer complaint entered → auto‑assign → send acknowledgement → schedule resolution → send feedback / survey after resolution — tracked in CRM.
Use‑case: custom compliance workflow for regulated industries (approvals, audits, documentation) in creatio?
+
Design approval workflows, audit logging, document storage, permissions, version history to meet compliance requirements.
Use‑case: customer onboarding workflow (for saas) using creatio?
+
Lead → contact → contract → onboarding tasks → welcome email → user training — all steps managed via workflow automation.
Use‑case: customizing dashboards for executive leadership to show high‑level KPIs?
+
Create dashboard combining sales pipeline, revenue forecast, service metrics, marketing ROI, customer satisfaction — for strategic decisions.
Use‑case: data archive and retention policies for old records in creatio for compliance / performance reasons?
+
Archive old data, soft‑delete records, purge logs after retention period — maintain performance and compliance.
Use‑case: event management (seminars, webinars) using creatio crm?
+
Registrations (leads), automated reminders, post-event follow‑ups, lead scoring, conversion to opportunity — full workflow in CRM.
Use‑case: globalization and multi‑region sales process with localisation (currency, language) in creatio?
+
Configure multi-currency fields, localization settings, region-based workflows, and assign regional teams — manage global operations.
Use‑case: handling subscription renewals and recurring billing pipelines in creatio?
+
Use workflows to send renewal reminders, generate invoices/contracts, update statuses, notify account managers — automating subscription lifecycle.
Use‑case: hr onboarding/offboarding and employee record management in creatio?
+
Employee entity, onboarding workflow, access assignment, role-based permissions, offboarding tasks — manageable via low‑code workflows.
Use‑case: integrating creatio with erp for order-to-cash process?
+
Sync customer/order data, invoices, inventory, payment status — ensure full order lifecycle from lead to cash in coordinated systems.
Use‑case: integrating telephony or pbx into creatio for call logging and click-to-call?
+
Use built‑in connectors or APIs to log calls, record interaction history, trigger follow-up tasks — unified communication tracking.
Use‑case: marketing nurture + re‑engagement workflows for dormant clients?
+
Segment old clients, run email/social campaigns, schedule follow-up tasks, track engagement, convert to opportunity if interest resumes.
Use‑case: marketing‑to‑sales handoff automation in creatio?
+
Marketing captures lead → nurtures → scores lead → when qualified, auto‑assign to sales rep → create opportunity → notify sales team — handoff automated.
Use‑case: multi‑team collaboration (sales + support + finance) for order & invoice process in creatio?
+
Shared data (customer, orders, invoices), workflows for approval, notifications across departments, status tracking — unified operations.
Use‑case: role-based dashboards and permissions for different teams in creatio?
+
Sales dashboard for sales team; support dashboard for service team; finance dashboard for billing — each with restricted access per role.
Use‑case: subscription‑based service lifecycle and renewal tracking using creatio?
+
Contracts entity, renewal dates, reminder workflows, invoice generation, customer communication — automate renewals and billing.
Use‑case: support ticket escalation and sla enforcement using creatio service module?
+
Ticket created → auto‑assign → SLA timer & reminder → if SLA breach, auto‑escalate or alert manager → resolution tracking.
Use‑case: vendor/supplier management (b2b) using creatio custom entities?
+
Define Vendor entity, track interactions, purchase orders, contracts, approvals — manage vendor lifecycle inside CRM.
User activity / task management within creatio?
+
Users/teams can create tasks, assign to others, track progress; integrated with CRM workflow and customer data.
User activity monitoring and analytics in creatio for management?
+
Track login history, record edits, workflow execution stats, error rates — use dashboards to monitor productivity, compliance and usage patterns.
User adoption strategy when switching to creatio crm in a company?
+
Communicate benefits, involve key users early, provide training, create incentives, gather feedback and iterate workflows.
User roles and permission hierarchy in large organizations using creatio?
+
Define roles (admin, sales rep, support agent, manager), assign permissions by module/record/field to enforce security and privacy.
User training approach when adopting creatio in an organization?
+
Role-based training, sandbox practice, documentation, mentorship, phased rollout, and gathering user feedback to refine workflows.
Version control for customizations in creatio?
+
Track changes to custom apps/workflows, manage versions or rollback if needed (depends on deployment/config).
Vertical‑specific (industry‑specific) workflow template in creatio?
+
Pre-built process templates for industries (finance, telecom, services) tailored to standard operations in that industry.
Webhook or external trigger support in creatio (for integrations)?
+
Creatio can integrate external triggers or webhooks to react to external events (e.g. from other systems) to start workflows.
Workflow automation in creatio?
+
Automated workflows that trigger actions (notifications, updates, assignments) based on events or conditions to reduce manual tasks.
Workflow trigger in creatio?
+
An event or condition (e.g. lead status change, new ticket, date/time event) that initiates an automated workflow.
Workflow versioning or change history in creatio?
+
Changes to workflows can be versioned or logged to allow rollback or audit of modifications.
Would you build a custom app (e.g. invoice management) in creatio without coding?
+
Define entities (Invoice/Payment), fields, relationships, UI layouts, workflows for invoice generation, approval, payment tracking — all via low‑code tools.
Would you ensure data integrity and avoid duplicates in creatio when many integrations feed data?
+
Use validation rules, deduplication logic, unique fields, audit logs, regular data cleanup, and possibly API‑side checks.
Would you implement a custom reporting module combining data from sales, service, and marketing in creatio?
+
Use cross‑entity queries or custom entities, aggregations, define filters, build dashboards, schedule report generation and export.
Would you implement data backup & disaster recovery for a creatio deployment?
+
Schedule regular backups, store off‑site, export critical data, plan failover, document restoration process and test periodically.
Would you implement sla‑driven customer service workflow in creatio?
+
Design SLA rules, assign case priorities, set timers/triggers, escalate cases on breach, send notifications, track resolution and compliance.
Would you integrate creatio with a third‑party billing or invoicing system?
+
Use REST API or built‑in connectors, map invoice/order data, design synchronization workflows, handle errors and updates.
Would you integrate creatio with an erp for order fulfillment?
+
Use Creatio APIs or connectors to sync orders, customer data, statuses; set up workflows to push/pull data, manage order lifecycle and inventory.
Would you manage user roles and permissions for a global company using creatio?
+
Define hierarchical roles, restrict data by region or business unit, implement least‑privilege principle, audit permissions regularly.
Would you migrate 100,000 leads into creatio from legacy system?
+
Perform data cleaning, mapping, batch import via CSV/API, validate imported data, test workflows, use sandbox first, then go live in phases.
Would you onboard non‑technical users to use creatio effectively?
+
Provide role‑based training, use step‑by‑step guides, give sandbox access, deliver mentorship, keep UI simple, and provide support documentation.
Would you plan disaster recovery and backup strategy for a global creatio deployment?
+
Define backup frequency, off‑site storage, restore procedures, failover servers, periodic DR drills.
You document crm customizations, workflows, data model for future maintenance when using creatio?
+
Maintain documentation repositories, version control of workflows, schema diagrams, change logs, and periodic reviews.
You ensure data consistency when multiple external systems sync to creatio?
+
Implement validation rules, transactional updates, conflict resolution logic, logging and monitoring for integration actions.
You ensure high availability for a critical creatio deployment (global enterprise)?
+
Use cloud hosting with redundancy, regular backups, failover setup, monitoring, scaling resources as needed, and disaster recovery planning.
You ensure performance and scalability when many workflows run simultaneously in creatio?
+
Optimize workflows, avoid heavy loops, batch operations, archive old data, monitor performance metrics, and scale resources as needed.
You handle data migration when business structure changes (e.g. reorganization of departments) in creatio?
+
Map old data to new structure, update entities/relationships, preserve history, test workflows, update permissions, inform users.
You handle gdpr / data‑privacy compliance when using creatio for eu customers?
+
Implement consent tracking, data retention policies, role‑based access, audit logs, anonymization, and document data handling procedures.
You handle multi‑tenant or multi‑subsidiary business using single creatio instance?
+
Use role & access isolation, custom entities for subsidiaries, partition data logically, implement permissions per tenant.
You handle subscription billing and renewals using creatio plus external billing module?
+
Use workflows for renewal reminder, integrate with billing system via API, create orders/invoices, track status — ensure data sync.
You handle version control and change management for workflows and customisations in creatio?
+
Maintain version history, use sandbox for testing, document changes, get approvals, deploy in stages, keep rollback plan.
You integrate external web forms/landing pages with creatio lead capture?
+
Use REST API or webhooks, map form fields to Creatio entities, validate input, create lead record automatically, trigger follow‑up workflows.
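A minimal C# sketch of the server side of this flow, assuming .NET 5+ (System.Net.Http.Json) and a generic REST endpoint — the URL, payload shape, and field names below are illustrative assumptions, not Creatio's documented API:
using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;

public static class LeadCapture
{
    private static readonly HttpClient Client = new HttpClient();

    public static async Task SubmitLeadAsync(string name, string email)
    {
        // Hypothetical endpoint and payload; replace with your instance's real API details.
        var response = await Client.PostAsJsonAsync(
            "https://example-crm.local/api/leads",
            new { Name = name, Email = email }); // map validated form fields to entity fields
        response.EnsureSuccessStatusCode();      // surface failures so the integration can log or retry
    }
}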
You manage data archive, cleanup of old records to maintain performance in creatio?
+
Define retention policies, archive or delete old data, purge logs, use separate storage/archival, monitor DB size/performance.
You manage security and access control for sensitive data (e.g. customer financials) in creatio?
+
Use field‑level permissions, role‑based access, encryption (if supported), audit logging, and restrict export options.
You merge records and manage duplicates in large datasets inside creatio?
+
Use deduplication tools, merge function, validation rules, manual review for ambiguous cases, and audit trail of merges.
You monitor system health, workflow execution metrics, and usage analytics in creatio?
+
Use built-in analytics, custom dashboards, logs for errors/performance, user activity reports, alerting on failures or heavy loads.
You onboard new teams or departments into existing creatio instance with minimal disruption?
+
Use phased rollout, training sessions, permission management, custom dashboards per department, and pilot user feedback.
You plan for system maintenance and upgrades in creatio used heavily with custom workflows and integrations?
+
Schedule maintenance windows, backup data, test upgrades in sandbox, update integrations, communicate with users, rollback plan if needed.
You support multi‑currency and global sales operations in creatio?
+
Configure currency fields, exchange rates, localizations, regional permissions, and adapt workflows per region.

C#

+
.NET?
+
A framework that provides runtime, libraries, and tools for building applications.
?. operator?
+
Null conditional operator to avoid NullReferenceException.
“throw” vs “throw ex”
+
throw preserves original stack trace., throw ex resets stack trace.
Abstract class?
+
A class that cannot be instantiated and may contain abstract members.
Abstraction?
+
Exposing essential features while hiding implementation details.
Accessibility in interface
+
All members in an interface are implicitly public., No need for modifiers because interfaces define a contract.
ADO.NET?
+
Data access framework for .NET.
Anonymous method?
+
Inline method declared without a name.
Anonymous Types in C#?
+
Anonymous types allow creating objects without defining a class. They are mostly used with LINQ queries to store temporary data. Example: var person = new { Name = "John", Age = 30 };.
ArrayList?
+
Non-generic dynamic array.
Arrays in C#?
+
Arrays are fixed-size, strongly-typed collections that store elements of the same type., They provide indexed access and are stored in contiguous memory.
Async stream?
+
Async iteration using IAsyncEnumerable.
Async/await?
+
Keywords for asynchronous programming.
Attribute in C#?
+
Metadata added to assemblies, classes, or members.
Attributes
+
Metadata added to code elements., Used for runtime behavior control., Example: [Obsolete], [Serializable].
Auto property?
+
Property with implicit backing field.
Base class for all classes
+
System.Object is the root base class in .NET., All classes derive from it directly or indirectly.
Base keyword?
+
Used to call base class members.
Boxing and Unboxing:
+
Boxing converts a value type to an object type. Unboxing extracts that value back from the object. Boxing is slower and stored on heap.
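A short example:
object boxed = 42;           // boxing: the int is copied into a heap object
int value = (int)boxed;      // unboxing: explicit cast copies the value back out
// Unboxing to a different type fails at runtime:
// long wrong = (long)boxed; // InvalidCastException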
Boxing?
+
Converting value type to object/reference type.
C#?
+
A modern, object-oriented programming language developed by Microsoft.
C#?
+
C# is an object-oriented programming language developed by Microsoft. It is used to build applications for web, desktop, cloud, and mobile platforms. It runs on the .NET framework.
C#? Latest version?
+
C# is an object-oriented programming language from Microsoft built on .NET. It supports strong typing, inheritance, and modern features like LINQ and async. The latest version (as of 2025) is C# 13.
Can “this” be used within a static method?
+
No, the this keyword cannot be used inside a static method., Static methods belong to the class, not to a specific object instance., Since this refers to the current instance, it is only valid in instance methods.
Can a private virtual method be overridden?
+
No, because private methods are not accessible in derived classes and virtual methods require inheritance.
Can multiple catch blocks be executed?
+
No, only one catch block executes—the one that matches the thrown exception. Other catch blocks are ignored.
Can multiple catch blocks execute?
+
No, only one matching catch block executes in a try-catch structure., The first matching exception handler is executed and others are skipped.
Can we use “this” keyword within a static method?
+
No, because this refers to the current instance, and static methods belong to the class—not an object.
Circular references
+
Occur when two or more objects reference each other., The .NET garbage collector can still reclaim unreachable cycles, but circular references can cause leaks through event subscriptions or reference-counted interop., Common in linked structures., Requires proper cleanup strategies.
Class vs struct?
+
Class is reference type; struct is value type.
Class?
+
Blueprint for creating objects.
CLR?
+
Common Language Runtime; manages execution, memory, security, and threading.
CLS?
+
Common Language Specification; rules for .NET language interoperability.
Common exception types
+
NullReferenceException, IndexOutOfRangeException, DivideByZeroException, FormatException, InvalidOperationException
Conflicting interface method names
+
Implement explicitly by specifying the interface name:
void IInterface1.Method() { }
void IInterface2.Method() { }
Conflicting methods in inherited interfaces:
+
If interfaces have identical method signatures, only one implementation is needed., If behavior differs, explicit interface implementation must be used.
Console application
+
Runs in command-line interface., No GUI., Used for scripting or service apps.
Constant vs Readonly:
+
const is compile-time constant and cannot change after compilation. readonly can be assigned at runtime (constructor). const is static by default.
Constructor chaining?
+
Constructor chaining allows one constructor to call another within the same class using this()., It helps avoid duplicate code and centralize initialization logic.
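A minimal sketch (illustrative Order class):
public class Order
{
    public int Quantity { get; }
    public Order() : this(1) { }   // chains to the parameterized constructor via this()
    public Order(int quantity)     // initialization logic lives in one place
    {
        Quantity = quantity;
    }
}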
Constructor?
+
Method invoked when an object is created.
Continue vs Break:
+
continue skips remaining loop code and moves to next iteration. break exits the loop entirely. Both control loop execution flow.
Contravariance?
+
Allows base types where derived types expected.
Covariance?
+
Allows derived types more liberally.
Create array with non-default values
+
int[] arr = Enumerable.Repeat(5, 10).ToArray();
CTS?
+
Common Type System; defines how data types are declared and used.
Custom Control and User Control?
+
User control is built by combining existing controls (drag and drop)., Custom control is created from scratch and reused across applications.
Custom exception?
+
User-defined exception class.
Custom Exceptions
+
User-defined exceptions for specific application errors., Created by inheriting Exception class., Helps make error handling meaningful and readable., Used to represent domain-specific failures.
Deadlock?
+
Two threads waiting forever for each other’s lock.
Define Constructors
+
A constructor is a special method that initializes objects when created. It has the same name as the class and doesn’t return a value.
Delegate?
+
Type-safe function pointer.
Delegates
+
A delegate is a type that holds a reference to a method., Enables event handling and callback mechanisms., Supports type safety and encapsulation of method calls., Similar to function pointers in C++.
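A minimal sketch (illustrative delegate and names):
public delegate int Operation(int a, int b);

public static class Calculator
{
    // The delegate lets any matching method or lambda be passed as a callback.
    public static int Apply(Operation op, int x, int y) => op(x, y);
}
// Usage: Calculator.Apply((a, b) => a + b, 2, 3) returns 5.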
Dependency injection?
+
Design pattern for providing external dependencies.
Describe the accessibility modifier “protected internal”.
+
It means the member can be accessed within the same assembly or from derived classes in other assemblies.
Deserialization?
+
Converting serialized data back to object.
Destructor?
+
Method called before an object is destroyed by GC.
Dictionary?
+
Key-value collection.
DifBet abstract class and interface?
+
Abstract class can have implementation; interface cannot (before C# 8).
DifBet Array & List?
+
Array has fixed size; List grows dynamically.
DifBet C# and .NET?
+
C# is a programming language; .NET is the runtime and framework.
DifBet const and readonly?
+
const is compile-time constant; readonly is runtime constant.
DifBet Dictionary and Hashtable?
+
Dictionary is generic and faster.
DifBet IEnumerable and IQueryable?
+
IEnumerable executes in memory; IQueryable executes in database.
DifBet ref and out?
+
ref requires initialization; out does not.
DifBet Task and Thread?
+
Task is a higher-level abstraction running on thread pool; Thread is OS-level.
DiffBet “is” and “as”
+
is checks type compatibility., as attempts the cast and returns null if it fails, without throwing an exception.
DiffBet == and Equals():
+
== checks reference equality for objects and value equality for value types., Equals() can be overridden for custom comparison logic.
DiffBet Array and ArrayList:
+
Array has fixed size and stores a single data type., ArrayList is dynamic and stores objects, requiring boxing/unboxing for value types.
DiffBet Array and ArrayList?
+
Array has fixed size and stores same data type., ArrayList can grow dynamically and stores mixed types.
DiffBet Array.CopyTo() and Array.Clone()
+
Clone() creates a shallow copy of the array including its size., CopyTo() copies elements into an existing array starting at a specified index., Clone() returns a new array of the same type., CopyTo() requires the destination array to be allocated beforehand.
DiffBet Array.CopyTo() and Array.Clone():
+
CopyTo() copies array elements to an existing array., Clone() creates a shallow copy of the entire array as a new instance.
DiffBet boxing and unboxing:
+
Boxing converts a value type to a reference type (object)., Unboxing converts the object back to its original value type., Boxing is implicit; unboxing must be explicit and can cause runtime errors if mismatched.
DiffBet constants and read-only?
+
const must be assigned at compile time and cannot change., readonly can be assigned at runtime, usually in a constructor.
DiffBet Dispose and Finalize in C#:
+
Dispose() is called manually to release unmanaged resources using IDisposable., Finalize() (destructor) is called automatically by the Garbage Collector., Dispose provides deterministic cleanup, while Finalize is non-deterministic and slower.
DiffBet Finalize() and Dispose()
+
Finalize() is called by the garbage collector and cannot be invoked manually., Dispose() is called manually to release unmanaged resources., Finalize() has performance overhead., Dispose() is implemented via IDisposable.
DiffBet IEnumerable and IQueryable:
+
IEnumerable filters data in memory and is suitable for in-memory collections., IQueryable filters data at the database level using expression trees., IQueryable supports remote querying, improving performance for large datasets.
DiffBet interface and abstract class
+
Interface contains only declarations, no implementation (until default methods in new versions)., Abstract class can have both abstract and concrete methods., A class can inherit multiple interfaces but only one abstract class., Interfaces define a contract; abstract classes provide a base.
DiffBet Is and As operators:
+
is checks whether an object is compatible with a type and returns true/false., as performs safe casting and returns null if the cast fails.
DiffBet late and early binding:
+
Early binding occurs at compile time (e.g., method calls on known types)., Late binding happens at runtime (e.g., using dynamic or reflection)., Early binding is faster and type-safe, while late binding is flexible but slower.
DiffBet public, static, and void?
+
public means accessible anywhere., static belongs to the class, not the instance., void means the method does not return any value.
DiffBet ref & out parameters?
+
ref requires the variable to be initialized before passing., out does not require initialization but must be assigned inside the method.
DiffBet String and StringBuilder in C#:
+
String is immutable, meaning every modification creates a new object., StringBuilder is mutable and efficient for repeated string manipulation., StringBuilder is preferred when working with dynamic or large text modifications.
DiffBet System.String and StringBuilder
+
String is immutable, meaning any modification creates a new object., StringBuilder is mutable and allows in-place modifications., StringBuilder is preferred for frequent string operations like concatenation., String is simpler and better for small or static content.
DiffBet Throw Exception and Throw Clause:
+
throw ex; resets the stack trace., throw; preserves the original stack trace, making debugging easier.
DirectCast vs CType
+
DirectCast requires exact type., CType supports conversions defined in VB or framework.
Dynamic keyword?
+
Type resolved at runtime.
Early binding?
+
Object referenced at compile time.
Encapsulation?
+
Binding data and methods inside a class.
Enum:
+
Enum is a value type representing named constants. Helps improve code readability. Default underlying type is integer.
Enum?
+
Value type representing named constants.
Event?
+
Used to provide notifications using delegates.
Exception?
+
Runtime error.
Explain types of comment in C# with examples
+
There are three types: single-line (// comment), multi-line (/* comment */), and XML documentation (/// comment), which is used for generating documentation.
Extension method in C#?
+
An extension method adds new functionality to existing classes without modifying them., It is defined in a static class and uses the this keyword before the first parameter., They are commonly used with LINQ and utility enhancements.
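A minimal sketch (illustrative extension):
using System;

public static class StringExtensions
{
    // 'this' on the first parameter makes WordCount callable like a string member.
    public static int WordCount(this string s) =>
        s.Split(' ', StringSplitOptions.RemoveEmptyEntries).Length;
}
// Usage: "hello world".WordCount() returns 2.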
Extension method?
+
Adds new methods to existing types without modifying them.
File Handling in C#.Net?
+
File handling allows reading, writing, and manipulating files using classes like File, FileStream, StreamReader, and StreamWriter. It is used to store or retrieve data from physical files.
Finally?
+
Block executed regardless of exception.
Garbage collection?
+
Automatic memory management.
GC generations?
+
Gen 0, Gen 1, Gen 2.
Generic type?
+
Allows type parameters for safe and reusable code.
Generics in .NET
+
Generics allow type-safe collections without boxing/unboxing., They improve performance and reusability., Examples: List<T>, Dictionary<TKey, TValue>., They enable compile-time type checking.
Generics?
+
Generics allow classes and methods to operate on types without specifying them upfront., They provide type safety and improve performance by avoiding boxing/unboxing.
HashSet?
+
Collection of unique items.
Hashtable in C#?
+
A Hashtable stores key-value pairs and provides fast access using a hash key. Keys are unique, and values can be of any type. It belongs to System.Collections.
Hashtable?
+
Non-generic key-value collection.
How do you use the “using” statement in C#?
+
The using statement ensures that resources like files or database connections are properly closed and disposed after use. It helps prevent memory leaks by automatically calling Dispose(). Example: using(StreamReader sr = new StreamReader("file.txt")) { }.
How to inherit a class
+
class B : A { }
How to prevent SQL Injection?
+
Use parameterized queries.
How to use Nullable<T> types?
+
Nullable types allow value types (like int) to store null using Nullable<T> or the ? suffix., Example: int? age = null;
ICollection?
+
Extends IEnumerable with add/remove operations.
IDisposable?
+
Interface to release unmanaged resources.
IEnumerable vs IEnumerator?
+
IEnumerable returns enumerator; IEnumerator iterates items.
IEnumerable?
+
Interface for forward-only iteration.
IEnumerable<> in C#?
+
IEnumerable is an interface used to iterate through a collection using foreach., It supports forward-only iteration and deferred execution., It does not support querying or modifying items directly.
In keyword?
+
Pass parameter by readonly reference.
Indexer?
+
Allows objects to be indexed like arrays.
Indexers
+
Allow a class to be accessed like an array., public string this[int index] { get; set; }
Indexers?
+
Indexers allow objects to be accessed like arrays using brackets []., They provide dynamic access to internal data without exposing underlying collections.
Inherit class but prevent method override
+
Use sealed keyword on the method., public sealed override void Method() { }
Inheritance?
+
Mechanism to derive new classes from existing classes.
Interface class? Give an example
+
An interface contains declarations of methods without implementation. Classes must implement them., Example: interface IShape { void Draw(); }.
Interface vs Abstract Class:
+
Interface only declares members; no implementation (until default implementations in newer versions). Abstract class can have both abstract and concrete members. A class can implement multiple interfaces but inherit only one abstract class.
Interface?
+
Contract containing method signatures without implementation.
IOC container?
+
Automates dependency injection and object creation.
IQueryable?
+
Supports LINQ queries for remote data sources.
Jagged Array in C#?
+
A jagged array is an array of arrays where each sub-array can have different lengths. It provides flexibility if the data structure doesn't need uniform size. Example: int[][] jagged = new int[2][]; jagged[0]=new int[3]; jagged[1]=new int[5];.
Jagged Arrays?
+
A jagged array is an array containing different-sized sub-arrays. It provides flexibility in storing uneven data structures.
JIT compiler?
+
Converts IL code to machine code at runtime.
JSON serialization?
+
Using System.Text.Json or Newtonsoft.Json to serialize objects.
Lambda expression?
+
Short syntax for writing inline methods/functions.
Late binding?
+
Object created at runtime instead of compile time.
LINQ in C#?
+
LINQ (Language Integrated Query) is a feature used to query data from collections, databases, XML, etc., using a unified syntax. It improves readability and reduces code. Example: var result = from x in list where x > 10 select x;.
LINQ?
+
Language Integrated Query for querying collections and databases.
List?
+
Generic list that stores strongly typed items.
Lock keyword?
+
Prevents multiple threads from accessing critical code section.
Managed or unmanaged?
+
C# code is managed because it runs under CLR.
Managed vs Unmanaged Code:
+
Managed code runs under CLR with garbage collection and memory management. Unmanaged code runs directly on OS without CLR support (like C/C++). Managed code is safer but slower.
Method overloading?
+
Multiple methods with the same name but different parameters.
Method overloading?
+
Method overloading allows multiple methods with the same name but different parameters. It improves flexibility and readability.
Method overriding?
+
Redefining base class methods in derived class using virtual/override.
Monitor?
+
Provides advanced locking features.
MSIL?
+
Microsoft Intermediate Language; the compiler output that the JIT converts to machine code at runtime.
Multicast delegate
+
A delegate that can reference multiple methods., Invokes them in order., Used in event handling.
Multicast delegate?
+
Delegate that references multiple methods.
Multicast delegate?
+
A multicast delegate holds references to multiple methods., When invoked, it executes all assigned methods in order.
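A short example using the built-in Action delegate:
using System;

Action pipeline = () => Console.WriteLine("step 1");
pipeline += () => Console.WriteLine("step 2"); // the delegate now references both methods
pipeline();                                    // prints "step 1" then "step 2", in order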
Multithreading with .NET?
+
Multithreading allows a program to run multiple tasks simultaneously, improving performance and responsiveness. In .NET, threads can be created using the Thread class or Task Parallel Library. It is commonly used in applications requiring background processing.
Mutex?
+
Synchronization primitive across processes.
Namespace?
+
A logical grouping of classes and other types.
Nullable type?
+
Value type that can hold null using ? syntax.
Nullable types
+
Used to store value types with null support., Example: int? x = null;
Null-Coalescing operator ??
+
Returns right operand if left operand is null.
Object pool
+
Object pooling reuses a set of pre-created objects., Improves performance by avoiding costly object creation., Common in high-performance applications., Useful for objects with expensive initialization.
Object Pooling?
+
Object pooling reuses frequently used objects instead of creating new ones., It improves performance by reducing memory allocation and garbage collection.
Object?
+
Instance of a class.
Object?
+
An object is an instance of a class containing data and behavior. It represents real-world entities in OOP. Objects interact using methods and properties.
Object?
+
An object is an instance of a class that contains data and behavior. It represents a real-world entity like student, car, or bank account.
Out keyword?
+
Pass parameter by reference but must be assigned inside method.
Overloading vs overriding
+
Overloading: same method name, different parameters., Overriding: derived class changes base class implementation., Overloading happens at compile time; overriding at runtime., Overriding requires virtual and override keywords.
Override keyword?
+
Used to override a virtual/abstract method.
Partial class?
+
Class definition split across multiple files.
Partial classes and why needed?
+
Partial classes allow a class definition to be split across multiple files., They help in code organization, especially separating auto-generated code from manual code., The compiler combines all partial files into a single class at compile time.
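A minimal sketch (illustrative file split):
// Customer.Generated.cs (e.g. tool-generated)
public partial class Customer
{
    public int Id { get; set; }
}

// Customer.cs (hand-written)
public partial class Customer
{
    public override string ToString() => $"Customer {Id}";
}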
Pattern matching?
+
Technique to match types and conditions.
Polymorphism?
+
Ability of objects to take many forms through inheritance and interfaces.
Preprocessor directive?
+
Instructions to compiler like #if, #region.
Properties in C#?
+
Properties are class members used to read, write, or compute values., They provide controlled access to private fields using get and set accessors., Properties improve encapsulation and help enforce validation on assignment.
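A minimal sketch with validation in the setter (illustrative class):
using System;

public class Employee
{
    private int _age;
    public int Age
    {
        get => _age;
        // The setter validates on assignment, enforcing encapsulation.
        set => _age = value >= 0
            ? value
            : throw new ArgumentOutOfRangeException(nameof(value));
    }
}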
Property?
+
Getter/setter wrapper for fields.
Race Condition?
+
Conflict when multiple threads access shared data.
Readonly?
+
Variable that can only be assigned in constructor.
Record type?
+
Immutable reference type introduced in C# 9.
Ref keyword?
+
Pass parameter by reference.
Ref vs out:
+
ref requires variable initialization before passing. out does not require initialization but must be assigned inside the method. Both pass arguments by reference.
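A short illustration (hypothetical helper methods):
static void Increment(ref int x) => x++;   // caller must initialize x before the call

static void ParseAge(string s, out int result)
{
    result = int.Parse(s);                 // out parameter must be assigned before returning
}

// int n = 1; Increment(ref n);           // n is now 2
// ParseAge("42", out int age);           // age is 42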
Reflection in C#?
+
Reflection allows inspecting and interacting with metadata (methods, properties, types) at runtime. It is used in frameworks, serialization, and dynamic object creation using System.Reflection.
Reflection?
+
Inspecting metadata and creating objects dynamically.
Remove element from queue
+
queue.Dequeue();
Role of Access Modifiers:
+
Access modifiers control visibility of classes and members., Examples include public, private, protected, and internal to enforce encapsulation.
Sealed class?
+
Class that cannot be inherited.
Sealed classes in C#?
+
A sealed class prevents further inheritance., It is used when modifications through inheritance should be restricted., sealed can also be applied to methods to stop overriding.
Sealed classes in C#?
+
A sealed class prevents inheritance. It is used to stop modification of behavior. Example: sealed class A { }.
Sealed method?
+
Method that cannot be overridden.
Semaphore?
+
Limits number of threads accessing a resource.
Serialization in C#?
+
Serialization is the process of converting an object into a format like XML, JSON, or binary for storage or transfer. It allows objects to be saved to files, memory, or sent over a network. Deserialization is the reverse, which reconstructs the object from serialized data.
Serialization?
+
Converting objects to JSON, XML, or binary.
Serialization?
+
Serialization converts an object into a storable or transferable format like JSON, XML, or binary. It is used for saving or transmitting data.
Singleton pattern
+
public class Singleton
{
    private static readonly Singleton instance = new Singleton();
    private Singleton() {}
    public static Singleton Instance => instance;
}
Singleton Pattern and implementation?
+
Singleton ensures only one instance of a class exists globally., It is implemented using a private constructor, a static field, and a public static instance property.
Sorting array in descending order
+
Array.Sort(arr);
Array.Reverse(arr);
SQL Injection?
+
Attack where malicious SQL is injected.
Static class?
+
Class that cannot be instantiated and contains only static members.
Static constructor?
+
Initializes static members of a class.
Static variable?
+
Shared among all instances of a class.
Struct vs class
+
Struct is value type; class is reference type., Structs are typically stored on the stack (when declared as locals); classes are stored on the heap., Structs cannot inherit but can implement interfaces., Classes support full inheritance.
Struct vs Class:
+
Structs are value types and typically stored on the stack; classes are reference types and stored on the heap. Structs do not support inheritance. Classes support features like virtual methods.
Struct?
+
Value type used to store small data structures.
Syntax to catch an exception
+
try
{
    // Code
}
catch (Exception ex)
{
    // Handle exception
}
Task in C#?
+
Represents an asynchronous operation.
This keyword?
+
Refers to the current instance.
Thread pool?
+
Managed pool of threads used by tasks.
Thread?
+
Smallest unit of execution.
Throw?
+
Used to raise an exception.
Try/catch?
+
Used to handle exceptions.
Tuple in C#?
+
A lightweight data structure with multiple values.
Unboxing?
+
Extracting value type from object.
Use of ‘using’ statement in C#?
+
It ensures automatic cleanup of resources by calling Dispose() when the scope ends. Useful for files, streams, and database connections.
Use of a delegate in C#:
+
A delegate represents a reference to a method., It allows methods to be passed as parameters and supports callback mechanisms., Delegates enable event handling and implement loose coupling.
Using statement?
+
Ensures IDisposable resources are disposed automatically.
Value types and reference types?
+
Value types store data directly (int, float, bool)., Reference types store memory addresses to objects (class, array, string).
Var?
+
Implicit local variable type inferred at compile time.
Virtual method?
+
Method that can be overridden in derived class.
Virtual Method?
+
A virtual method allows derived classes to override its implementation., It supports runtime polymorphism.
Ways a method can be overloaded:
+
Overloading can be done by changing the number of parameters, the type of parameters, or the order of parameters.
Ways to overload a method
+
Change number of parameters., Change data type of parameters., Change order of parameters (only if type differs).
What type of language is C#?
+
Strongly typed, object-oriented, component-oriented.
Yield keyword?
+
Return sequence of values without storing all items.
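A minimal sketch (illustrative iterator):
using System;
using System.Collections.Generic;

static IEnumerable<int> Evens(int max)
{
    for (int i = 0; i <= max; i += 2)
        yield return i;   // each value is produced lazily, on demand
}

foreach (int n in Evens(6))
    Console.WriteLine(n); // prints 0, 2, 4, 6 without building a list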

DDD (Domain-Driven Design)

+
Advantage of ddd?
+
Aligns software design with business rules, improves maintainability, and supports complex domains effectively.
Aggregate?
+
A cluster of related entities and value objects treated as a single unit for consistency.
Bounded context?
+
A boundary defining where a specific domain model applies. Prevents ambiguity in large systems with multiple models.
Ddd supports microservices?
+
By defining bounded contexts, each microservice can own its domain model and database, reducing coupling.
Ddd?
+
DDD is an approach to software design focusing on core domain logic, modeling real-world business processes, and aligning software structure with business needs.
Diffbet ddd and traditional layered architecture?
+
DDD emphasizes domain and business logic first, while traditional layers often prioritize technical layers like UI, DB, and service.
Domain event?
+
An event representing something significant that happens in the domain, triggering reactions in other parts of the system.
Entity in ddd?
+
An object with a unique identity that persists over time, e.g., Customer with a unique ID.
Repository in ddd?
+
A pattern for persisting and retrieving aggregates while abstracting data storage details.
Value object?
+
An object defined by attributes rather than identity. Immutable and used to describe aspects of entities, e.g., Address.
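A minimal C# sketch, assuming records are acceptable in the domain model (records compare by value, which matches value-object semantics):
public record Address(string Street, string City, string PostalCode);

// Two instances with the same attributes are equal, regardless of references:
// new Address("1 Main St", "Springfield", "12345")
//     == new Address("1 Main St", "Springfield", "12345")   // true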

Design Pattern

+
Adapter pattern example
+
Adapter converts one interface to another that clients expect. Example: converting a legacy XML service to JSON API format.
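A minimal C# sketch of this scenario (the service and adapter names are illustrative):
using System.Xml.Linq;

public interface IJsonService { string GetJson(); }

public class LegacyXmlService
{
    public string GetXml() => "<user><name>Ann</name></user>";
}

// The adapter wraps the legacy service and exposes the interface clients expect.
public class XmlToJsonAdapter : IJsonService
{
    private readonly LegacyXmlService _legacy;
    public XmlToJsonAdapter(LegacyXmlService legacy) => _legacy = legacy;

    public string GetJson()
    {
        var name = XDocument.Parse(_legacy.GetXml()).Root!.Element("name")!.Value;
        return $"{{\"user\":{{\"name\":\"{name}\"}}}}"; // simplified conversion for illustration
    }
}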
Advantages of design patterns
+
Improve reusability, maintainability, readability, and communication between developers.
Avoid design patterns
+
Avoid them when they add unnecessary complexity. Overuse may make simple code overly abstract or harder to understand.
Behavioral patterns
+
Observer, Strategy, Iterator, Command, Mediator, Template Method, Chain of Responsibility.
Bridge vs adapter pattern
+
Adapter works with existing code to make incompatible interfaces work together, while Bridge separates abstraction from implementation to scale systems.
Command pattern in ui
+
Command objects encapsulate UI actions like Copy, Paste, Undo. They can be queued, logged, or undone.
Creational patterns
+
Singleton, Factory, Abstract Factory, Prototype, Builder.
Decorator pattern example
+
Adding features like encryption or compression to a file stream dynamically without modifying the original class.
Dependency inversion principle
+
High-level modules should depend on abstractions, not concrete classes. DI containers and patterns like Factory and Strategy help achieve loose coupling.
Design pattern?
+
A reusable solution to a common programming problem. It provides best practices for structuring code.
Design patterns are used in java’s jdk?
+
JDK uses several patterns such as Singleton (Runtime), Factory (Calendar.getInstance()), Strategy (Comparator), Iterator (Iterator interface), and Observer (Listener model in Swing). These patterns solve reusable design challenges in library features.
Design patterns vs algorithms
+
Algorithms solve computational tasks while design patterns solve architectural design problems. Algorithms have fixed steps; patterns are flexible templates.
Design principles vs patterns
+
Principles guide how to write good code (SOLID), while patterns provide reusable proven solutions.
Factory method pattern example
+
Factory Method creates objects without exposing creation logic. Example: Calendar.getInstance() or creating different document types based on input.
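A minimal C# sketch (the document types and factory are illustrative):
using System;

public abstract class Document { public abstract string Kind { get; } }
public class PdfDocument : Document { public override string Kind => "pdf"; }
public class WordDocument : Document { public override string Kind => "word"; }

public static class DocumentFactory
{
    // Creation logic is centralized; callers never use 'new' on concrete types.
    public static Document Create(string type) => type switch
    {
        "pdf"  => new PdfDocument(),
        "word" => new WordDocument(),
        _      => throw new ArgumentException($"Unknown type: {type}")
    };
}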
Gang of four?
+
Gang of Four (GoF) refers to four authors who wrote the book "Design Patterns: Elements of Reusable Object-Oriented Software" in 1994. They introduced 23 standard design patterns widely used in software development.
Inversion of control?
+
IoC means the framework controls object creation and lifecycle rather than the programmer. Commonly implemented via Dependency Injection.
Observer pattern
+
Observer allows objects (observers) to get notified automatically when the subject changes state. Used in event-driven systems like Java Swing listeners.
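A minimal C# sketch using events, the idiomatic observer in .NET (names are illustrative):
using System;

public class Stock
{
    public event Action<decimal> PriceChanged;   // observers subscribe here

    private decimal _price;
    public decimal Price
    {
        get => _price;
        set { _price = value; PriceChanged?.Invoke(value); } // notify all subscribers
    }
}

// var stock = new Stock();
// stock.PriceChanged += p => Console.WriteLine($"New price: {p}");
// stock.Price = 10.5m;  // the subscriber runs automatically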
Open/closed principle
+
Classes should be open for extension but closed for modification. Design patterns like Strategy, Decorator, and Template enforce this principle.
Patterns help in refactoring
+
Patterns reduce duplication, simplify logic, improve scalability, and make code modular when refactoring legacy systems.
Prevent over-engineering
+
Use patterns only when they solve a real problem. Follow YAGNI ("You Aren’t Gonna Need It") and refactor gradually.
Purpose of uml in design patterns
+
UML diagrams visualize relationships, responsibilities, and structure of design patterns, aiding understanding and implementation.
Real-world singleton example
+
java.lang.Runtime and logging frameworks like Log4j use Singleton to manage shared resources across the application.
Role of design patterns
+
They provide reusable solutions to common software problems and promote flexibility, maintainability, and scalability.
Scenario: command vs strategy pattern
+
Command is better when you need undo/redo, queueing actions, or macro commands in UI. Strategy is better when switching between interchangeable algorithms.
Single responsibility principle
+
SRP states that a class should have only one reason to change. It improves maintainability, readability, and testing in software design.
Singleton pattern & when to use?
+
Singleton ensures only one instance of a class exists and provides a global point of access. Used in logging, configuration settings, caching, or database connection management.
Solid principles?
+
SOLID stands for Single Responsibility, Open/Closed, Liskov Substitution, Interface Segregation, and Dependency Inversion. These principles help make code maintainable, extendable, and loosely coupled.
Strategy pattern example
+
Sorting algorithms (QuickSort, MergeSort, BubbleSort) can be swapped at runtime based on input size or performance needs.
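A minimal C# sketch (the strategy names are illustrative; AscendingSort delegates to Array.Sort rather than implementing a real quicksort):
using System;

public interface ISortStrategy { void Sort(int[] data); }

public class AscendingSort : ISortStrategy
{
    public void Sort(int[] data) => Array.Sort(data);
}

public class DescendingSort : ISortStrategy
{
    public void Sort(int[] data) { Array.Sort(data); Array.Reverse(data); }
}

public class Sorter
{
    private ISortStrategy _strategy;
    public Sorter(ISortStrategy strategy) => _strategy = strategy;
    public void SetStrategy(ISortStrategy strategy) => _strategy = strategy; // swap at runtime
    public void Sort(int[] data) => _strategy.Sort(data);
}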
Structural patterns
+
Adapter, Decorator, Composite, Proxy, Facade, Bridge, Flyweight.
Types of design patterns
+
Creational, Structural, and Behavioral.

DevOps Commands Cheat Sheet

+
Basic Linux Commands
+

Linux is the foundation of DevOps operations - it's like a Swiss Army knife for servers. These commands help you navigate systems, manage files, configure permissions, and automate tasks in terminal environments.

1. pwd - Print the current working directory.

2. ls - List files and directories.

3. cd - Change directory.

4. touch - Create an empty file.

5. mkdir - Create a new directory.

6. rm - Remove files or directories.

7. rmdir - Remove empty directories.

8. cp - Copy files or directories.

9. mv - Move or rename files and directories.

10. cat - Display the content of a file.

11. echo - Display a line of text.

12. clear - Clear the terminal screen.

Intermediate Linux Commands
+

13. chmod - Change file permissions.

14. chown - Change file ownership.

15. find - Search for files and directories.

16. grep - Search for text in a file.

17. wc - Count lines, words, and characters in a file.

18. head - Display the first few lines of a file.

19. tail - Display the last few lines of a file.

20. sort - Sort the contents of a file.

21. uniq - Remove duplicate lines from a file.

22. diff - Compare two files line by line.

23. tar - Archive files into a tarball.

24. zip/unzip - Compress and extract ZIP files.

25. df - Display disk space usage.

26. du - Display directory size.

27. top - Monitor system processes in real time.

28. ps - Display active processes.

29. kill - Terminate a process by its PID.

30. ping - Check network connectivity.

31. wget - Download files from the internet.

32. curl - Transfer data from or to a server.

33. scp - Securely copy files between systems.

34. rsync - Synchronize files and directories.

Advanced Linux Commands
+

35. awk - Text processing and pattern scanning.

36. sed - Stream editor for filtering and transforming text.

37. cut - Remove sections from each line of a file.

38. tr - Translate or delete characters.

39. xargs - Build and execute command lines from standard input.

40. ln - Create symbolic or hard links.

41. df -h - Display disk usage in human-readable format.

42. free - Display memory usage.

43. iostat - Display CPU and I/O statistics.

44. netstat - Network statistics (use ss as modern alternative).

45. ifconfig/ip - Configure network interfaces (use ip as modern alternative).

46. iptables - Configure firewall rules.

47. systemctl - Control the systemd system and service manager.

48. journalctl - View system logs.

49. crontab - Schedule recurring tasks.

50. at - Schedule tasks for a specific time.

51. uptime - Display system uptime.

52. whoami - Display the current user.

53. users - List all users currently logged in.

54. hostname - Display or set the system hostname.

55. env - Display environment variables.

56. export - Set environment variables.

Networking Commands
+

57. ip addr - Display or configure IP addresses.

58. ip route - Show or manipulate routing tables.

59. traceroute - Trace the route packets take to a host.

60. nslookup - Query DNS records.

61. dig - Query DNS servers.

62. ssh - Connect to a remote server via SSH.

63. ftp - Transfer files using the FTP protocol.

64. nmap - Network scanning and discovery.

65. telnet - Communicate with remote hosts.

66. netcat (nc) - Read/write data over networks.

File Management and Search
+

67. locate - Find files quickly using a database.

68. stat - Display detailed information about a file.

69. tree - Display directories as a tree.

70. file - Determine a file’s type.

71. basename - Extract the filename from a path.

72. dirname - Extract the directory part of a path.

System Monitoring
+

73. vmstat - Display virtual memory statistics.

74. htop - Interactive process viewer (alternative to top).

75. lsof - List open files.

76. dmesg - Print kernel ring buffer messages.

77. uptime - Show how long the system has been running.

78. iotop - Display real-time disk I/O by processes.

Package Management
+

79. apt - Package manager for Debian-based distributions.

80. yum/dnf - Package manager for RHEL-based distributions.

81. snap - Manage snap packages.

82. rpm - Manage RPM packages.

Disk and Filesystem
+

83. mount/umount - Mount or unmount filesystems.

84. fsck - Check and repair filesystems.

85. mkfs - Create a new filesystem.

86. blkid - Display information about block devices.

87. lsblk - List information about block devices.

88. parted - Manage partitions interactively.

Scripting and Automation
+

89. bash - Command interpreter and scripting shell.

90. sh - Legacy shell interpreter.

91. cron - Automate tasks.

92. alias - Create shortcuts for commands.

93. source - Execute commands from a file in the current shell.

Development and Debugging
+

94. gcc - Compile C programs.

95. make - Build and manage projects.

96. strace - Trace system calls and signals.

97. gdb - Debug programs.

98. git - Version control system.

99. vim/nano - Text editors for scripting and editing.

Other Useful Commands
+

100. uptime - Display system uptime.

101. date - Display or set the system date and time.

102. cal - Display a calendar.

103. man - Display the manual for a command.

104. history - Show previously executed commands.

105. alias - Create custom shortcuts for commands.

Basic Git Commands
+

Git is your code time machine. It tracks every change, enables team collaboration without conflicts, and lets you undo mistakes. These commands help manage source code versions like a professional developer.

1. git init

Initializes a new Git repository in the current directory. Example: git init

2. git clone

Copies a remote repository to the local machine.

Example: git clone https://github.com/user/repo.git

3. git status

Displays the state of the working directory and staging area. Example: git status

4. git add

Adds changes to the staging area. Example: git add file.txt

5. git commit

Records changes to the repository.

Example: git commit -m "Initial commit"

6. git config

Configures user settings, such as name and email.

Example: git config --global user.name "Your Name"

7. git log

Shows the commit history. Example: git log

8. git show

Displays detailed information about a specific commit. Example: git show

9. git diff

Shows changes between commits, the working directory, and the staging area. Example: git diff

10. git reset

Unstages changes or resets commits. Example: git reset HEAD file.txt

Branching and Merging
+

11. git branch

Lists branches or creates a new branch. Example: git branch feature-branch

12. git checkout

Switches between branches or restores files. Example: git checkout feature-branch

13. git switch

Switches branches (modern alternative to git checkout). Example: git switch feature-branch

14. git merge

Combines changes from one branch into another. Example: git merge feature-branch

15. git rebase

Moves or combines commits from one branch onto another. Example: git rebase main

16. git cherry-pick

Applies specific commits from one branch to another. Example: git cherry-pick <commit-hash>

Remote Repositories
+

17. git remote

Manages remote repository connections.

Example: git remote add origin https://github.com/user/repo.git

18. git push

Sends changes to a remote repository. Example: git push origin main

19. git pull

Fetches and merges changes from a remote repository. Example: git pull origin main

20. git fetch

Downloads changes from a remote repository without merging. Example: git fetch origin

21. git remote -v

Lists the URLs of remote repositories. Example: git remote -v

Stashing and Cleaning
+

22. git stash

Temporarily saves changes not yet committed. Example: git stash

23. git stash pop

Applies stashed changes and removes them from the stash list. Example: git stash pop

24. git stash list

Lists all stashes.

Example: git stash list

25. git clean

Removes untracked files from the working directory. Example: git clean -f

Tagging
+

26. git tag

Creates a tag for a specific commit.

Example: git tag -a v1.0 -m "Version 1.0"

27. git tag -d

Deletes a tag.

Example: git tag -d v1.0

28. git push --tags

Pushes tags to a remote repository. Example: git push origin --tags

Advanced Commands
+

29. git bisect

Finds the commit that introduced a bug. Example: git bisect start

30. git blame

Shows which commit and author modified each line of a file. Example: git blame file.txt

31. git reflog

Shows a log of changes to the tip of branches. Example: git reflog

32. git submodule

Manages external repositories as submodules.

Example: git submodule add https://github.com/user/repo.git

33. git archive

Creates an archive of the repository files.

Example: git archive --format=zip HEAD > archive.zip

34. git gc

Cleans up unnecessary files and optimizes the repository. Example: git gc

GitHub-Specific Commands
+

35. gh auth login

Logs into GitHub via the command line. Example: gh auth login

36. gh repo clone

Clones a GitHub repository.

Example: gh repo clone user/repo

37. gh issue list

Lists issues in a GitHub repository. Example: gh issue list

38. gh pr create

Creates a pull request on GitHub.

Example: gh pr create --title "New Feature" --body "Description of the feature"

39. gh repo create

Creates a new GitHub repository. Example: gh repo create my-repo

Basic Docker Commands
+

Docker packages applications into portable containers - like shipping containers for software. These commands help build, ship, and run applications consistently across any environment.

1. docker --version

Displays the installed Docker version. Example: docker --version

2. docker info

Shows system-wide information about Docker, such as the number of containers and images.

Example: docker info

3. docker pull

Downloads an image from a Docker registry (default: Docker Hub). Example: docker pull ubuntu:latest

4. docker images

Lists all downloaded images. Example: docker images

5. docker run

Creates and starts a new container from an image. Example: docker run -it ubuntu bash

6. docker ps

Lists running containers. Example: docker ps

7. docker ps -a

Lists all containers, including stopped ones. Example: docker ps -a

8. docker stop

Stops a running container.

Example: docker stop container_name

9. docker start

Starts a stopped container.

Example: docker start container_name

10. docker rm

Removes a container.

Example: docker rm container_name

11. docker rmi

Removes an image.

Example: docker rmi image_name

12. docker exec

Runs a command inside a running container.

Example: docker exec -it container_name bash

Intermediate Docker Commands
+

13. docker build

Builds an image from a Dockerfile.

Example: docker build -t my_image .

14. docker commit

Creates a new image from a container’s changes.

Example: docker commit container_name my_image:tag

15. docker logs

Fetches logs from a container.

Example: docker logs container_name

16. docker inspect

Returns detailed information about an object (container or image). Example: docker inspect container_name

17. docker stats

Displays live resource usage statistics of running containers. Example: docker stats

18. docker cp

Copies files between a container and the host.

Example: docker cp container_name:/path/in/container /path/on/host

19. docker rename

Renames a container.

Example: docker rename old_name new_name

20. docker network ls

Lists all Docker networks. Example: docker network ls

21. docker network create

Creates a new Docker network.

Example: docker network create my_network

22. docker network inspect

Shows details about a Docker network.

Example: docker network inspect my_network

23. docker network connect

Connects a container to a network.

Example: docker network connect my_network container_name

24. docker volume ls

Lists all Docker volumes. Example: docker volume ls

25. docker volume create

Creates a new Docker volume.

Example: docker volume create my_volume

26. docker volume inspect

Provides details about a volume.

Example: docker volume inspect my_volume

27. docker volume rm

Removes a Docker volume.

Example: docker volume rm my_volume

Advanced Docker Commands
+

28. docker-compose up

Starts services defined in a docker-compose.yml file. Example: docker-compose up

29. docker-compose down

Stops and removes services defined in a docker-compose.yml file. Example: docker-compose down

30. docker-compose logs

Displays logs for services managed by Docker Compose. Example: docker-compose logs

31. docker-compose exec

Runs a command in a service’s container.

Example: docker-compose exec service_name bash

32. docker save

Exports an image to a tar file.

Example: docker save -o my_image.tar my_image:tag

33. docker load

Imports an image from a tar file.

Example: docker load < my_image.tar

34. docker export

Exports a container’s filesystem as a tar file.

Example: docker export container_name > container.tar

35. docker import

Creates an image from an exported container.

Example: docker import container.tar my_new_image

36. docker system df

Displays disk usage by Docker objects. Example: docker system df

37. docker system prune

Cleans up unused Docker resources (images, containers, volumes, networks). Example: docker system prune

38. docker tag

Assigns a new tag to an image.

Example: docker tag old_image_name new_image_name

39. docker push

Uploads an image to a Docker registry. Example: docker push my_image:tag

40. docker login

Logs into a Docker registry. Example: docker login

41. docker logout

Logs out of a Docker registry. Example: docker logout

42. docker swarm init

Initializes a Docker Swarm mode cluster. Example: docker swarm init

43. docker service create

Creates a new service in Swarm mode.

Example: docker service create --name my_service nginx

44. docker stack deploy

Deploys a stack using a Compose file in Swarm mode.

Example: docker stack deploy -c docker-compose.yml my_stack

45. docker stack rm

Removes a stack in Swarm mode. Example: docker stack rm my_stack

46. docker checkpoint create

Creates a checkpoint for a container.

Example: docker checkpoint create container_name checkpoint_name

47. docker checkpoint ls

Lists checkpoints for a container.

Example: docker checkpoint ls container_name

48. docker checkpoint rm

Removes a checkpoint.

Example: docker checkpoint rm container_name checkpoint_name

Basic Kubernetes Commands -
+

Kubernetes is the conductor of your container orchestra. It automates deployment, scaling, and management of containerized applications across server clusters.

1. kubectl version

Displays the Kubernetes client and server version. Example: kubectl version (the --short flag used in older guides was removed in recent kubectl releases)

2. kubectl cluster-info

Shows information about the Kubernetes cluster. Example: kubectl cluster-info

3. kubectl get nodes

Lists all nodes in the cluster. Example: kubectl get nodes

4. kubectl get pods

Lists all pods in the default namespace. Example: kubectl get pods

5. kubectl get services

Lists all services in the default namespace. Example: kubectl get services

6. kubectl get namespaces

Lists all namespaces in the cluster. Example: kubectl get namespaces

7. kubectl describe pod

Shows detailed information about a specific pod. Example: kubectl describe pod pod-name

8. kubectl logs

Displays logs for a specific pod. Example: kubectl logs pod-name

9. kubectl create namespace

Creates a new namespace.

Example: kubectl create namespace my-namespace

10. kubectl delete pod

Deletes a specific pod.

Example: kubectl delete pod pod-name

Intermediate Kubernetes Commands
+

11. kubectl apply

Applies changes defined in a YAML file.

Example: kubectl apply -f deployment.yaml
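
For reference, a minimal deployment.yaml that kubectl apply could consume might look like this; the name, image, and replica count are illustrative assumptions:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  replicas: 2                 # desired number of pod replicas
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: nginx:1.25     # illustrative container image
        ports:
        - containerPort: 80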

12. kubectl delete

Deletes resources defined in a YAML file.

Example: kubectl delete -f deployment.yaml

13. kubectl scale

Scales a deployment to the desired number of replicas.

Example: kubectl scale deployment my-deployment --replicas=3

14. kubectl expose

Exposes a pod or deployment as a service.

Example: kubectl expose deployment my-deployment --type=LoadBalancer --port=80

15. kubectl exec

Executes a command in a running pod.

Example: kubectl exec -it pod-name -- /bin/bash

16. kubectl port-forward

Forwards a local port to a port in a pod.

Example: kubectl port-forward pod-name 8080:80

17. kubectl get configmaps

Lists all ConfigMaps in the namespace. Example: kubectl get configmaps

18. kubectl get secrets

Lists all Secrets in the namespace. Example: kubectl get secrets

19. kubectl edit

Edits a resource definition directly in the editor.

Example: kubectl edit deployment my-deployment

20. kubectl rollout status

Displays the status of a deployment rollout.

Example: kubectl rollout status deployment/my-deployment

Advanced Kubernetes Commands
+

21. kubectl rollout undo

Rolls back a deployment to a previous revision.

Example: kubectl rollout undo deployment/my-deployment

22. kubectl top nodes

Shows resource usage for nodes. Example: kubectl top nodes

23. kubectl top pods

Displays resource usage for pods. Example: kubectl top pods

24. kubectl cordon

Marks a node as unschedulable.

Example: kubectl cordon node-name

25. kubectl uncordon

Marks a node as schedulable.

Example: kubectl uncordon node-name

26. kubectl drain

Safely evicts all pods from a node.

Example: kubectl drain node-name --ignore-daemonsets

27. kubectl taint

Adds a taint to a node to control pod placement.

Example: kubectl taint nodes node-name key=value:NoSchedule

28. kubectl get events

Lists all events in the cluster. Example: kubectl get events

29. kubectl apply -k

Applies resources from a kustomization directory.

Example: kubectl apply -k ./kustomization-dir/

30. kubectl config view

Displays the kubeconfig file. Example: kubectl config view

31. kubectl config use-context

Switches the active context in kubeconfig.

Example: kubectl config use-context my-cluster

32. kubectl debug

Creates a debugging session for a pod. Example: kubectl debug pod-name

33. kubectl delete namespace

Deletes a namespace and its resources.

Example: kubectl delete namespace my-namespace

34. kubectl patch

Updates a resource using a patch.

Example: kubectl patch deployment my-deployment -p '{"spec": {"replicas": 2}}'

35. kubectl rollout history

Shows the rollout history of a deployment.

Example: kubectl rollout history deployment my-deployment

36. kubectl autoscale

Automatically scales a deployment based on resource usage. Example: kubectl autoscale deployment my-deployment --cpu-percent=50 --min=1 --max=10

37. kubectl label

Adds or modifies a label on a resource.

Example: kubectl label pod pod-name environment=production

38. kubectl annotate

Adds or modifies an annotation on a resource.

Example: kubectl annotate pod pod-name description="My app pod"

39. kubectl delete pv

Deletes a PersistentVolume (PV). Example: kubectl delete pv my-pv

40. kubectl get ingress

Lists all Ingress resources in the namespace. Example: kubectl get ingress

41. kubectl create configmap

Creates a ConfigMap from a file or literal values. Example: kubectl create configmap my-config --from-literal=key1=value1

42. kubectl create secret

Creates a Secret from a file or literal values.

Example: kubectl create secret generic my-secret --from-literal=password=myPassword

43. kubectl api-resources

Lists all available API resources in the cluster. Example: kubectl api-resources

44. kubectl api-versions

Lists all API versions supported by the cluster. Example: kubectl api-versions

45. kubectl get crds

Lists all CustomResourceDefinitions (CRDs). Example: kubectl get crds

Basic Helm Commands -
+

Helm is the app store for Kubernetes. It simplifies installing and managing complex applications using pre-packaged "charts" - think of it like apt-get for Kubernetes.

1. helm help

Displays help for the Helm CLI or a specific command. Example: helm help

2. helm version

Shows the Helm client version (Helm 3 no longer has a server-side component). Example: helm version

3. helm repo add

Adds a new chart repository.

Example: helm repo add stable https://charts.helm.sh/stable

4. helm repo update

Updates all Helm chart repositories to the latest version. Example: helm repo update

5. helm repo list

Lists all the repositories added to Helm. Example: helm repo list

6. helm search hub

Searches for charts on Helm Hub. Example: helm search hub nginx

7. helm search repo

Searches for charts in the repositories.

Example: helm search repo stable/nginx

8. helm show chart

Displays information about a chart, including metadata and dependencies. Example: helm show chart stable/nginx

Installing and Upgrading Charts
+

9. helm install

Installs a chart into a Kubernetes cluster.

Example: helm install my-release stable/nginx

10. helm upgrade

Upgrades an existing release with a new version of the chart. Example: helm upgrade my-release stable/nginx

11. helm upgrade --install

Installs a chart if it isn’t installed or upgrades it if it exists.

Example: helm upgrade --install my-release stable/nginx

12. helm uninstall

Uninstalls a release.

Example: helm uninstall my-release

13. helm list

Lists all the releases installed on the Kubernetes cluster. Example: helm list

14. helm status

Displays the status of a release. Example: helm status my-release

Working with Helm Charts
+

15. helm create

Creates a new Helm chart in a specified directory. Example: helm create my-chart

16. helm lint

Lints a chart to check for common errors. Example: helm lint ./my-chart

17. helm package

Packages a chart into a .tgz file. Example: helm package ./my-chart

18. helm template

Renders the Kubernetes YAML files from a chart without installing it. Example: helm template my-release ./my-chart

19. helm dependency update

Updates the dependencies in the Chart.yaml file. Example: helm dependency update ./my-chart

Advanced Helm Commands
+

20. helm rollback

Rolls back a release to a previous version. Example: helm rollback my-release 1

21. helm history

Displays the history of a release. Example: helm history my-release

22. helm get all

Gets all information (including values and templates) for a release. Example: helm get all my-release

23. helm get values

Displays the values used in a release. Example: helm get values my-release

24. helm test

Runs tests defined in a chart. Example: helm test my-release

Helm Chart Repositories
+

25. helm repo remove

Removes a chart repository.

Example: helm repo remove stable

26. helm repo update

Updates the local cache of chart repositories. Example: helm repo update

27. helm repo index

Creates or updates the index file for a chart repository. Example: helm repo index ./charts

Helm Values and Customization
+

28. helm install --values

Installs a chart with custom values.

Example: helm install my-release stable/nginx --values values.yaml
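
For illustration, a values.yaml passed with --values might look like the sketch below; the keys shown (replicaCount, image, service) follow common chart conventions, but the actual keys depend entirely on the chart being installed:

replicaCount: 3              # number of pod replicas the chart should create
image:
  repository: nginx          # illustrative image repository
  tag: "1.25"
service:
  type: ClusterIP
  port: 80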

29. helm upgrade --values

Upgrades a release with custom values.

Example: helm upgrade my-release stable/nginx --values values.yaml

30. helm install --set

Installs a chart with a custom value set directly in the command. Example: helm install my-release stable/nginx --set replicaCount=3

31. helm upgrade --set

Upgrades a release with a custom value set.

Example: helm upgrade my-release stable/nginx --set replicaCount=5

32. helm uninstall --purge

Removes a release and its recorded history. Note: --purge was the Helm 2 flag (helm delete --purge); in Helm 3, helm uninstall removes the release history by default, and --keep-history retains it. Example (Helm 3): helm uninstall my-release

Helm Template and Debugging
+

33. helm template --debug

Renders Kubernetes manifests and includes debug output. Example: helm template my-release ./my-chart --debug

34. helm install --dry-run

Simulates the installation process to show what will happen without actually installing.

Example: helm install my-release stable/nginx --dry-run

35. helm upgrade --dry-run

Simulates an upgrade process without actually applying it.

Example: helm upgrade my-release stable/nginx --dry-run

Helm and Kubernetes Integration
+

36. helm list --namespace

Lists releases in a specific Kubernetes namespace. Example: helm list --namespace kube-system

37. helm uninstall --namespace

Uninstalls a release from a specific namespace.

Example: helm uninstall my-release --namespace kube-system

38. helm install --namespace

Installs a chart into a specific namespace.

Example: helm install my-release stable/nginx --namespace mynamespace

39. helm upgrade --namespace

Upgrades a release in a specific namespace.

Example: helm upgrade my-release stable/nginx --namespace mynamespace

Helm Chart Development
+

40. helm package --sign

Packages a chart and signs it using a GPG key.

Example: helm package ./my-chart --sign --key my-key-id

41. helm create --starter

Creates a new Helm chart based on a starter template (the starter must live in Helm's starters directory or be given as an absolute path).

Example: helm create my-chart --starter my-starter

42. helm push

Pushes a packaged chart to a chart registry (OCI registries are supported natively since Helm 3.8; older setups used plugins such as helm-push). Example: helm push my-chart-0.1.0.tgz oci://registry.example.com/charts

Helm with Kubernetes CLI
+

43. helm list -n

Lists releases in a specific Kubernetes namespace. Example: helm list -n kube-system

44. helm install --kube-context

Installs a chart to a Kubernetes cluster defined in a specific kubeconfig context. Example: helm install my-release stable/nginx --kube-context my-cluster

45. helm upgrade --kube-context

Upgrades a release in a specific Kubernetes context.

Example: helm upgrade my-release stable/nginx --kube-context my-cluster

Helm Chart Dependencies
+

46. helm dependency build

Builds dependencies for a Helm chart.

Example: helm dependency build ./my-chart

47. helm dependency list

Lists all dependencies for a chart.

Example: helm dependency list ./my-chart

Helm History and Rollbacks
+

48. helm rollback --recreate-pods

Rolls back to a previous version and recreates pods.

Example: helm rollback my-release 2 --recreate-pods

49. helm history --max

Limits the number of versions shown in the release history. Example: helm history my-release --max 5

Basic Terraform Commands -
+

Terraform lets you build cloud infrastructure with code. Instead of clicking buttons in AWS/GCP/Azure consoles, you define servers and services in configuration files.

50. terraform --help = Displays general help for Terraform CLI commands.

51. terraform init = Initializes the working directory containing Terraform configuration files. It downloads the necessary provider plugins.

52. terraform validate = Validates the Terraform configuration files for syntax errors or issues.

53. terraform plan = Creates an execution plan, showing what actions Terraform will perform to make the infrastructure match the desired configuration.

54. terraform apply = Applies the changes required to reach the desired state of the configuration. It will prompt for approval before making changes.

55. terraform show = Displays the Terraform state or a plan in a human-readable format.

56. terraform output = Displays the output values defined in the Terraform configuration after an apply.

57. terraform destroy = Destroys the infrastructure defined in the Terraform configuration. It prompts for confirmation before destroying resources.

58. terraform refresh = Updates the state file with the real infrastructure's current state without applying changes.

59. terraform taint = Marks a resource for recreation on the next apply. Useful for forcing a resource to be recreated even if it hasn't been changed.

60. terraform untaint = Removes the "tainted" status from a resource.

61. terraform state = Manages the Terraform state file through subcommands, such as moving resources between modules or manually removing entries.

62. terraform import = Imports existing infrastructure into Terraform management.

63. terraform graph = Generates a graphical representation of Terraform's resources and their relationships.

64. terraform providers = Lists the providers available for the current Terraform configuration.

65. terraform state list = Lists all resources tracked in the Terraform state file.

66. terraform backend = Not a standalone CLI command: the backend for storing Terraform state remotely (e.g., in S3 or Azure Blob Storage) is declared in the terraform block of the configuration and activated with terraform init.

67. terraform state mv = Moves an item in the state from one location to another.

68. terraform state rm = Removes an item from the Terraform state file.

69. terraform workspace = Manages Terraform workspaces, which allow for creating separate environments within a single configuration.

70. terraform workspace new = Creates a new workspace.

71. terraform module = Not a standalone CLI command: modules are reusable configurations declared in code; they are fetched and updated with terraform get or terraform init.

72. terraform init -get-plugins=true = A legacy flag; modern Terraform versions fetch required provider plugins automatically during terraform init.

73. TF_LOG = Sets the logging level for Terraform debug output (e.g., TRACE, DEBUG, INFO, WARN, ERROR).

74. TF_LOG_PATH = Directs Terraform logs to a specified file.

75. terraform login = Logs into Terraform Cloud or Terraform Enterprise for managing remote backends and workspaces.

76. terraform remote = Manages remote backends and remote state storage for Terraform configurations.

77. terraform push = A legacy command that pushed configurations to the now-retired Atlas service; publishing modules today goes through a module registry.
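
To ground these commands, a minimal Terraform configuration that terraform init, plan, and apply could operate on might look like this sketch; the provider, region, and bucket name are illustrative assumptions:

# main.tf - a minimal illustrative configuration
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = "us-east-1"                   # illustrative region
}

resource "aws_s3_bucket" "example" {
  bucket = "my-example-bucket-12345"     # hypothetical bucket name
}

output "bucket_arn" {
  value = aws_s3_bucket.example.arn      # shown by terraform output after apply
}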

DevOps Interview Questions and Answers

+
What is DevOps, and why is it important?
+
Ans: DevOps is a set of practices that bridges the gap between development and operations teams by automating and integrating processes to improve collaboration, speed up software delivery, and maintain product reliability. It emphasizes continuous integration, continuous deployment (CI/CD), and monitoring, ensuring faster development, better quality control, and efficient infrastructure management. We need DevOps to shorten development cycles, improve release efficiency, and foster a culture of collaboration across the software delivery lifecycle.
Can you explain the differences between Agile and DevOps?
+
Ans:
Focus: Agile - software development and iterative releases. DevOps - collaboration between dev & ops for smooth deployment.
Scope: Agile - development only. DevOps - development, deployment, and operations.
Automation: Agile - some automation in testing. DevOps - heavy automation in CI/CD, infrastructure, and monitoring.
Feedback Loop: Agile - end-user & stakeholder feedback. DevOps - continuous monitoring & real-time feedback.
What are the key principles of DevOps?
+
Ans: Key Principles of DevOps:
Automation: Automate processes like testing, integration, and deployment to speed up delivery and reduce errors.
Collaboration: Encourage close collaboration between development, QA, and operations teams.
Continuous Integration/Continuous Deployment (CI/CD): Ensure code changes are automatically tested and deployed to production environments.
Monitoring and Feedback: Continuously monitor applications in production to detect issues early and provide quick feedback to developers.
Infrastructure as Code (IaC): Manage infrastructure using versioned code to ensure consistency across environments.
Culture of Improvement: Foster a culture of continuous learning and improvement through frequent retrospectives and experimentation.
How do Continuous Integration (CI) and Continuous Deployment (CD) work together in a DevOps environment?
+
Ans: Continuous Integration (CI): CI involves integrating code changes into a shared repository several times a day. Each integration is verified through automated tests and builds to ensure that the new changes don't break the existing system. Goal: Detect errors as early as possible by running tests and builds frequently.
Continuous Deployment (CD): CD extends CI by automatically deploying the integrated and tested code to production. The deployment process is fully automated, ensuring that any change passing the test suite is released to end users. Goal: Deliver updates and features to production quickly and with minimal manual intervention.
Together, CI ensures code stability through frequent integration and testing, while CD ensures that code reaches production smoothly and reliably.
What challenges did you face in implementing DevOps in your previous projects?
+
Some challenges I've faced in implementing DevOps in previous projects include:
Cultural Resistance: Development and operations teams often work in silos, and moving to a DevOps model requires a culture of collaboration that can face resistance.
Tool Integration: Finding the right tools and integrating them smoothly into the CI/CD pipeline can be challenging, especially when there are legacy systems involved.
Skill Gaps: Teams often lack experience in using DevOps tools like Jenkins, Docker, or Kubernetes, which can slow down implementation.
Infrastructure Complexity: Managing infrastructure using IaC (like Terraform) requires a solid understanding of infrastructure management, which can be difficult for development-focused teams.
Security Concerns: Incorporating security checks into the CI/CD pipeline (DevSecOps) can add complexity, and ensuring compliance with security policies is a challenge, especially when frequent deployments are involved.

Version Control (Git, GitHub)

Git
What is Git?
+
Git is a version control system used to track changes in code and collaborate with teams.
How do you clone a repository?
+
git clone <repository-url>
What is the difference between git fetch and git pull?
+
git fetch: Downloads changes but does not merge them. git pull: Downloads and merges changes into the working branch.
What are the benefits of using version control systems like Git?
+
Ans:
Collaboration: Multiple team members can work on the same project without overwriting each other's changes.
Tracking Changes: Every modification is tracked, allowing you to see who made changes, when, and why.
Branching and Merging: Git allows developers to create branches to work on features or fixes independently and merge them back into the main branch when ready.
Backup: The code is saved on a remote repository (e.g., GitHub), providing a backup if local copies are lost.
Version History: You can revert to any previous version of the project in case of issues, enabling quick rollbacks.
Code Review: Git enables code reviews through pull requests before changes are merged into the main codebase.
How do you resolve conflicts in Git?
+
Ans: Conflicts occur when multiple changes are made to the same part of a file. To resolve:
Identify the Conflict: Git will indicate files with conflicts when you try to merge or rebase. Open the conflicting file to see the conflicting changes.
Edit the File: Git marks the conflicts with <<<<<<<, =======, and >>>>>>> markers. These indicate the conflicting changes. Choose or combine the desired changes (see the sketch below).
Mark as Resolved: Once you have resolved the conflict, run git add to mark the conflict as resolved.
Continue the Operation: Complete the process by running git commit (for merge conflicts) or git rebase --continue (for rebase conflicts).
Push the Changes: Once everything is resolved, push the changes to the repository.
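As an illustration, a conflicted file might look like the following; the branch name feature-branch and the two print lines are hypothetical:

<<<<<<< HEAD
print("Hello from main")
=======
print("Hello from feature")
>>>>>>> feature-branch

Everything between <<<<<<< HEAD and ======= is your current branch's version; everything between ======= and >>>>>>> is the incoming change. Delete the markers and keep the line (or combination) you want before running git add.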
What is a rebase, and when would you use it instead of merging?
+
Ans: Rebase: Rebase moves or "replays" your changes on top of another branch's changes. Instead of merging two branches, rebasing applies commits from one branch onto the tip of another, creating a linear history.
When to Use Rebase: When you want a clean, linear history without merge commits, or when working on a feature branch and you want to incorporate the latest changes from the main branch before completing your work.
Rebase vs. Merge: Merge combines histories and creates a new commit to merge them. This keeps the branching history intact but may result in a more complex history with multiple merge commits. Rebase rewrites history to appear as if the feature branch was developed directly from the tip of the main branch.
Can you explain Git branching strategies (e.g., Git Flow, Trunk-Based Development)?
+
Ans: Git Flow: In this strategy, you have several long-lived branches (e.g., main for production, develop for ongoing development, and feature branches for new features). Release branches are created from develop and eventually merged into main. Bug fixes are often done in hotfix branches created from main and merged back into both develop and main.
Trunk-Based Development: Developers commit small, frequent changes directly to a central branch (the "trunk" or main). Feature branches are short-lived, and large feature development is broken down into smaller, incremental changes to minimize the risk of conflicts. This method often works well in CI/CD environments where continuous deployment is key.
Other Strategies: GitHub Flow: Similar to trunk-based development but emphasizes the use of short-lived branches and pull requests. Feature Branching: Each feature is developed in its own branch and merged into develop or main when ready.
How do you create and switch branches in Git?
+
Create a branch: git branch feature-branch
Switch to a branch: git checkout feature-branch
How do you merge a branch in Git?
+
git checkout main
git merge feature-branch
How do you resolve merge conflicts in Git?
+
Git will show conflicts in the affected files. Edit the files, resolve conflicts, then:
git add .
git commit -m "Resolved conflicts"
How do you push changes to a remote repository?
+
git push origin branch_name
How do you undo the last commit in Git?
+
Soft reset: git reset --soft HEAD~1 (keeps changes)
Hard reset: git reset --hard HEAD~1 (discards changes)
Explain the Git lifecycle from cloning a repo to pushing code.
+
git clone → download the repository
git checkout -b feature-branch → create a new branch
git add . → stage changes
git commit -m "message" → save changes
git push origin feature-branch → upload changes to GitHub
What is Git architecture?
+
Git uses a distributed version control system, meaning:
Working Directory → where you make changes
Staging Area → holds changes before commit
Local Repository → stores all versions of files
Remote Repository → hosted on GitHub/GitLab

GitHub
How do you integrate GitHub with CI/CD tools?
+
Ans: Webhooks: GitHub can send webhooks to CI/CD tools (like Jenkins, GitLab CI, or GitHub Actions) when specific events happen (e.g., a commit or pull request).
GitHub Actions: GitHub has built-in CI/CD capabilities with GitHub Actions, which allows you to automate tests, builds, and deployments on push or pull requests.
Third-Party Tools: Other CI/CD tools (e.g., Jenkins, GitLab CI) can integrate with GitHub using:
Access tokens: You can generate personal access tokens in GitHub to authenticate CI tools for repository access.
GitHub Apps: Many CI tools provide GitHub Apps for easy integration, allowing access to repositories, workflows, and pull requests.
Docker: You can use Docker images in your CI/CD pipelines by pulling them from Docker Hub to create consistent build environments.
Pull Requests and CI: CI tools often run automated tests when a pull request is opened to ensure that the proposed changes pass tests before merging.
What are artifacts in GitLab CI?
+
Artifacts are files generated by a GitLab CI/CD job that can be preserved and shared between jobs. Example: compiled binaries, test reports, logs. They are defined in .gitlab-ci.yml using the artifacts: keyword, as in the snippet below.
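A minimal sketch of the artifacts: keyword in .gitlab-ci.yml; the job name, script, and paths are illustrative assumptions:

build-job:
  stage: build
  script:
    - make build                # hypothetical build step
  artifacts:
    paths:
      - dist/                   # files to preserve and pass to later jobs
    expire_in: 1 week           # optional retention period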
CI/CD Pipeline (Jenkins, GitHub Actions, ArgoCD, GitLab)

General Q&A

How would you design a CI/CD pipeline for a project?
+
Ans: Designing a CI/CD pipeline involves the following steps:
Code Commit: Developers push code to a version control system (like GitHub or GitLab).
Build: The pipeline starts with building the code using tools like Maven (for Java), npm (for Node.js), or pip (for Python). The build ensures that the code compiles without issues.
Testing: Automated tests run next, including unit tests, integration tests, and sometimes end-to-end tests. Tools like JUnit (Java), PyTest (Python), and Jest (JavaScript) are often used.
Static Code Analysis: Tools like SonarQube or ESLint are used to analyze the code for potential issues, security vulnerabilities, or code quality concerns.
Package & Artifact Creation: If the build is successful, the application is packaged into an artifact, such as a JAR/WAR file, Docker image, or a zip package.
Artifact Storage: Artifacts are stored in repositories like Nexus, Artifactory, or Docker Hub for future deployment.
Deployment to Staging/Testing Environment: The application is deployed to a staging environment for further testing, including functional, performance, or security tests.
Approval Gates: Before deploying to production, manual or automated approval gates are often put in place to ensure no faulty code is deployed.
Deploy to Production: After approval, the pipeline deploys the artifact to the production environment.
Monitoring: Post-deployment monitoring using tools like Grafana and Prometheus ensures that the application is stable.
What tools have you used for CI/CD, and why did you choose them (e.g., Jenkins, GitLab CI, CircleCI)?
+
Ans: Jenkins: Jenkins is highly customizable with a vast range of plugins and support for almost any CI/CD task. I use Jenkins because of its flexibility, scalability, and ease of integration with different technologies.
GitHub Actions: I use GitHub Actions for small projects or where deep GitHub integration is required. It's simple to set up and great for automating workflows directly within GitHub.
GitLab CI: GitLab CI is chosen for projects that are hosted on GitLab due to its seamless integration, allowing developers to use GitLab's built-in CI features with less setup effort.
ArgoCD: This tool is essential for continuous delivery in Kubernetes environments due to its GitOps-based approach.
Docker: Docker simplifies packaging applications into containers, ensuring consistent environments across development, testing, and production.
Terraform: Terraform automates infrastructure provisioning, making it an integral part of deployment pipelines for infrastructure as code (IaC).
Can you explain the different stages of a CI/CD pipeline?
+
Ans: Source/Code Stage: Developers commit code to a version control repository like GitHub or GitLab.
Build Stage: The pipeline compiles the source code and packages it into an executable format.
Test Stage: Automated tests are executed, including unit, integration, and performance tests, ensuring code functionality and quality.
Artifact Stage: The build is transformed into a deployable artifact (like a Docker image) and stored in a repository.
Deployment Stage: The artifact is deployed to a staging environment, followed by production after approval.
Post-Deployment: Continuous monitoring is performed to ensure the system's stability after deployment, with tools like Grafana or Prometheus.
What are artifacts, and how do you manage them in a pipeline?
+
Ans: Artifacts are the files or build outputs that are created after the code is built and tested, such as JAR/WAR files (for Java applications), Docker images, ZIP packages, and binary files.
Artifact Management:
Storage: Artifacts are stored in artifact repositories like Nexus, Artifactory, or Docker Hub (for Docker images).
Versioning: Artifacts are versioned and tagged based on the code release or build number to ensure traceability and rollback capabilities.
Retention Policies: Implement retention policies to manage storage, removing old artifacts after a certain period.
How do you handle rollbacks in the case of a failed deployment?
+
Ans: Handling rollbacks depends on the deployment strategy used:
Canary or Blue-Green Deployment: These strategies allow you to switch traffic between versions without downtime. If the new version fails, traffic can be redirected back to the old version.
Versioned Artifacts: Since artifacts are versioned, rollbacks can be performed by redeploying the last known good version from the artifact repository.
Automated Rollback Triggers: Use automated health checks in the production environment. If something fails post-deployment, the system can automatically roll back the deployment.
Infrastructure as Code: For infrastructure failures, tools like Terraform allow reverting to previous infrastructure states, making rollback simpler and safer.

Jenkins
What is Jenkins? Why is it used?
+
Answer: Jenkins is an open-source automation server that helps in automating the parts of software development related to building, testing, and deploying. It is primarily used for continuous integration (CI) and continuous delivery (CD), enabling developers to detect and fix bugs early in the development lifecycle, thereby improving software quality and reducing the time to deliver.
How does Jenkins achieve Continuous Integration?
+
Answer: Jenkins integrates with version control systems (like Git) and can automatically build and test the code whenever changes are committed. It triggers builds automatically, runs unit tests and static analysis, and deploys the code to the server if everything is successful. Jenkins can be configured to send notifications to the team about the status of the build.
What is a Jenkins pipeline?
+
Answer: A Jenkins pipeline is a suite of plugins that supports implementing and integrating continuous delivery pipelines into Jenkins. It provides a set of tools for defining complex build workflows as code, making it easier to automate the build, test, and deployment processes.
What are the two types of Jenkins pipelines?
+
Answer:
Declarative Pipeline: A newer, simpler syntax, defined within a pipeline block.
Scripted Pipeline: Offers more flexibility and is written in Groovy-like syntax, but is more complex.
What is the difference between a freestyle project and a pipeline project in Jenkins?
+
Answer:
Freestyle Project: This is the basic form of a Jenkins project, where you can define simple jobs, such as running a shell script or executing a build step.
Pipeline Project: This allows you to define complex job sequences, orchestrating multiple builds, tests, and deployments across different environments.
How do you configure a Jenkins job to be triggered periodically?
+
Answer: You can configure periodic job triggers in Jenkins by enabling the "Build periodically" option in the job configuration. You define the schedule using cron syntax, for example, H/5 * * * * to run the job every 5 minutes.
What are the different ways to trigger a build in Jenkins?
+
Answer:
Manual trigger by clicking "Build Now".
Triggering through source code changes (e.g., Git hooks).
Using a cron schedule for periodic builds.
Triggering through webhooks or API calls.
Triggering builds after other builds are completed.
What are Jenkins agents? How do they work?
+
Answer: Jenkins agents (also called nodes or slaves) are machines that are configured to execute tasks/jobs on behalf of the Jenkins master. The master delegates jobs to the agents, which can be on different platforms or environments. Agents help in distributing the load of executing tasks across multiple machines.
How can you integrate Jenkins with other tools like Git, Maven, or Docker?
+
Answer: Jenkins supports integration with other tools using plugins. For instance:
Git: You can install the Git plugin to pull code from a repository.
Maven: The Maven plugin is used to build Java projects.
Docker: You can install the Docker plugin to build and deploy Docker containers.
What is Blue Ocean in Jenkins?
+
Answer: Blue Ocean is a modern, user-friendly interface for Jenkins that provides a simplified view of continuous delivery pipelines. It offers better visualization of the entire pipeline and makes it easier to troubleshoot failures with a more intuitive UI compared to the classic Jenkins interface.
What are the steps to secure Jenkins?
+
Answer:
Enable security with matrix-based security or role-based access control.
Ensure Jenkins is running behind a secure network and uses HTTPS.
Use SSH keys for secure communication.
Install and configure necessary security plugins, like OWASP Dependency-Check.
Keep Jenkins and its plugins up to date to avoid vulnerabilities.
What is a Jenkinsfile?
+
Answer: A Jenkinsfile is a text file that contains the definition of a Jenkins pipeline. It can be versioned alongside your code and is used to automate the build, test, and deployment processes. There are two types of Jenkinsfiles: declarative and scripted. A minimal declarative example follows.
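A minimal declarative Jenkinsfile sketch; the stage names and shell commands are illustrative assumptions:

pipeline {
    agent any                            // run on any available agent
    stages {
        stage('Build') {
            steps { sh 'make build' }    // hypothetical build command
        }
        stage('Test') {
            steps { sh 'make test' }     // hypothetical test command
        }
    }
}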
How does Jenkins handle parallel execution in pipelines?
+
Answer: Jenkins supports parallel execution of pipeline stages using the parallel directive. This allows you to execute multiple tasks (e.g., building and testing on different environments) simultaneously, thereby reducing the overall build time.
stage('Parallel Execution') {
    parallel {
        stage('Unit Tests') {
            steps { echo 'Running unit tests...' }
        }
        stage('Integration Tests') {
            steps { echo 'Running integration tests...' }
        }
    }
}
How can you monitor Jenkins logs and troubleshoot issues?
+
Answer: Jenkins logs can be monitored through the Jenkins UI in the "Manage Jenkins" section under "System Log". Additionally, job-specific logs can be accessed in each job's build history. For more detailed logs, you can check the Jenkins server log files located on the system where Jenkins is hosted.
How can you handle failed builds in Jenkins?
+
Answer:
Automatic retries: Configure Jenkins to retry the build a specified number of times after a failure.
Post-build actions: Set up notifications or trigger other jobs in case of failure.
Pipeline steps: Use conditional logic in pipelines to handle failures (e.g., try-catch blocks).
How do you write parallel jobs in a Jenkins pipeline?
+
Use the parallel directive in a Jenkinsfile:
stage('Parallel Execution') {
    parallel {
        stage('Job 1') {
            steps { echo 'Executing Job 1' }
        }
        stage('Job 2') {
            steps { echo 'Executing Job 2' }
        }
    }
}

GitHub Actions
What are GitHub Actions and how do they work?
+
Answer: GitHub Actions is a CI/CD tool that allows you to automate tasks within your repository. It works by defining workflows using YAML files in the .github/workflows directory. Workflows can trigger on events like push, pull_request, or even scheduled times, and they define a series of jobs that run within a virtual environment.
How do you create a GitHub Actions workflow?
+
Answer: To create a workflow, you add a YAML file under .github/workflows/. In this file, you define:
on: The event that triggers the workflow (e.g., push, pull_request).
jobs: The set of tasks that should be executed.
steps: Actions within each job, such as checking out the repository or running scripts.
A minimal example follows.
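A minimal workflow sketch (e.g., saved as .github/workflows/ci.yml); the job name and test command are illustrative assumptions:

name: CI
on: [push, pull_request]          # events that trigger the workflow

jobs:
  build:
    runs-on: ubuntu-latest        # GitHub-hosted runner
    steps:
      - uses: actions/checkout@v4 # check out the repository
      - run: make test            # hypothetical test command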
What are runners in GitHub Actions?
+
Answer: Runners are servers that execute the workflows. GitHub offers hosted runners with common pre-installed tools (Linux, macOS, Windows), or you can use self-hosted runners if you need specific environments.
How do you securely store secrets in GitHub Actions?
+
Answer: You can store secrets like API keys or credentials using GitHub's Secrets feature. These secrets are encrypted and can be accessed in workflows via ${{ secrets.MY_SECRET }}.

ArgoCD
Q1: What is Argo CD, and how does it work in a DevOps pipeline?
+
A1: Argo CD is a GitOps continuous delivery tool for Kubernetes. It automates application deployments by syncing the live state with the desired state defined in Git.
Q2: How does Argo CD implement the GitOps model?
+
A2: Argo CD uses Git repositories as the source of truth for application configurations. It continuously monitors the repository to ensure the live state matches the desired state.
Q3: What are the key features of Argo CD that make it suitable for DevOps?
+
A3: Key features include automated deployments, multi-cluster management, drift detection, rollback, and integration with CI/CD tools. These make it ideal for Kubernetes environments.
Q4: How does Argo CD handle rollback and recovery?
+
A4: Argo CD allows rollback by reverting to a previous commit in Git. This helps recover from failed deployments or configuration drifts quickly.
Q5: Can Argo CD be used in multi-cluster environments?
+
A5: Yes, Argo CD supports managing applications across multiple Kubernetes clusters, making it suitable for large-scale or multi-cloud environments.
Q6: How does Argo CD integrate with other CI/CD tools?
+
A6: Argo CD integrates with tools like Jenkins, GitLab CI, and GitHub Actions. It handles deployment after the CI pipeline builds the application.
Q7: What is drift detection in Argo CD?
+
A7: Drift detection identifies when the live state of an application differs from the desired state in Git. Argo CD can sync the application back to the correct state.
Q8: What are the benefits of using Argo CD in a DevOps environment?
+
A8: Benefits include faster deployments, improved collaboration, reliable rollbacks, and audit trails for compliance. It also supports multi-cluster management.

Q9: How do you secure Argo CD in a DevOps environment?
+
A9: Argo CD can be secured with authentication (OAuth2, SSO), RBAC, TLS encryption, and audit logging for compliance and security.
Q10: What is the role of the Argo CD CLI in DevOps?
+
A10: The Argo CD CLI allows interaction with the API server to manage applications, sync deployments, and monitor health. It aids in automation and integration.
Q11: How do you manage secrets in Argo CD?
+
A11: Argo CD integrates with Kubernetes Secrets, HashiCorp Vault, or external secret management tools to securely manage sensitive data.
Q12: What is the Argo CD ApplicationSet?
+
A12: The ApplicationSet is a feature in Argo CD that allows dynamic creation of applications based on a template and parameters, useful for managing multiple similar applications.
Q13: How does Argo CD handle application health monitoring?
+
A13: Argo CD monitors application health by checking the status of Kubernetes resources. It provides real-time updates and can trigger alerts for unhealthy applications.
Q14: Can Argo CD be used for blue-green or canary deployments?
+
A14: Yes, Argo CD supports blue-green and canary deployments by managing different versions of applications and controlling traffic routing to minimize downtime.
Q15: How does Argo CD handle application synchronization?
+
A15: Argo CD automatically syncs applications when a change is detected in the Git repository. It can also be manually triggered to sync the desired state.
Q16: What is the difference between Argo CD and Helm?
+
A16: Argo CD is a GitOps tool for continuous delivery, while Helm is a package manager for Kubernetes applications. Argo CD can use Helm charts for deployment.
Q17: How do you manage Argo CD's access control?
+
A17: Argo CD uses RBAC (Role-Based Access Control) to manage user permissions, ensuring only authorized users can perform specific actions on applications.
Q18: How does Argo CD handle multi-tenancy?
+
A18: Argo CD supports multi-tenancy by using RBAC, allowing multiple teams to manage their own applications within a shared Kubernetes cluster.
Q19: What are the different sync options in Argo CD?
+
A19: Argo CD offers manual, automatic, and semi-automatic sync options. Manual sync requires user intervention, while automatic sync happens when a change is detected in the Git repository.
Q20: What is the difference between "App of Apps" and "ApplicationSet" in Argo CD?
+
A20: "App of Apps" is a pattern where one application manages other applications, while "ApplicationSet" dynamically creates applications based on a template and parameters.

GitLab
What is GitLab?
+
Answer: GitLab is a web-based DevOps lifecycle tool that provides a Git repository manager, allowing teams to collaborate on code. It offers features such as version control, CI/CD (Continuous Integration and Continuous Deployment), issue tracking, and monitoring. GitLab integrates various stages of the software development lifecycle into a single application, enabling teams to streamline their workflows.
How does GitLab CI/CD work?
+
Answer: GitLab CI/CD automates the software development process. You define your CI/CD pipeline in a .gitlab-ci.yml file located in the root of your repository. This file specifies the stages, jobs, and scripts to run. GitLab Runner, an application that executes the CI/CD jobs, picks up the configuration and runs the jobs on specified runners, whether they are shared, group, or specific runners.
What is a GitLab Runner?
+
Answer: A GitLab Runner is an application that processes CI/CD jobs in GitLab. It can be installed on various platforms and can run jobs in different environments (e.g., Docker, shell). Runners can be configured to be shared across multiple projects or dedicated to a specific project. They execute the scripts defined in the .gitlab-ci.yml file.
What is the difference between GitLab and GitHub?
+
Answer: While both GitLab and GitHub are Git repository managers, they have different focuses and features. GitLab offers integrated CI/CD, issue tracking, and project management tools all in one platform, making it suitable for DevOps workflows. GitHub is more focused on social coding and open-source projects, although it has added some CI/CD features with GitHub Actions. GitLab also provides self-hosting options, while GitHub primarily operates as a cloud service.
Can you explain the GitLab branching strategy?
+
Answer: A common GitLab branching strategy is Git Flow, which involves having separate branches for different purposes:
Master/Main: The stable version of the code.
Develop: The integration branch for features.
Feature branches: Created from the develop branch for specific features.
Release branches: Used for preparing a new production release.
Hotfix branches: Used for urgent fixes on the master branch.
This strategy helps manage development workflows and releases effectively.
What is the purpose of a .gitlab-ci.yml file?
+
Answer: The .gitlab-ci.yml file defines the CI/CD pipeline configuration for a GitLab project. It specifies the stages, jobs, scripts, and conditions under which the jobs should run. This file is essential for automating the build, test, and deployment processes in GitLab CI/CD. A minimal example follows.
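A minimal .gitlab-ci.yml sketch with two sequential stages; the stage names and echo commands are illustrative assumptions:

stages:
  - build
  - test

build-job:
  stage: build
  script:
    - echo "Compiling..."       # hypothetical build step

test-job:
  stage: test
  script:
    - echo "Running tests..."   # hypothetical test step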
How do you handle merge conflicts in GitLab?
+
Answer: Merge conflicts occur when two branches have changes that cannot be automatically reconciled. To resolve conflicts in GitLab, you can:
1. Merge the conflicting branch into your current branch locally.
2. Use Git commands (git merge or git rebase) to resolve conflicts in your code editor.
3. Commit the resolved changes.
4. Push the changes back to the repository.
Alternatively, you can use the GitLab web interface to resolve conflicts in the merge request.
What are GitLab CI/CD pipelines?
+
Answer: GitLab CI/CD pipelines are a set of automated processes defined in the .gitlab-ci.yml file that facilitate the build, test, and deployment of code. A pipeline consists of one or more stages, where each stage can contain multiple jobs. Jobs in a stage run concurrently, while stages run sequentially. Pipelines help ensure consistent delivery of code and automate repetitive tasks.
What is the purpose of GitLab issues?
+
Answer: GitLab issues provide a way to track tasks, bugs, and feature requests within a project. They help teams manage their work by allowing them to create, assign, comment on, and close issues. Each issue can include labels, milestones, and due dates, making it easier to prioritize and organize tasks.
Explain the concept of tags in GitLab.
+
Answer: Tags in GitLab are references to specific points in a repository's history, typically used to mark release versions or important milestones. Tags are immutable and serve as a snapshot of the code at a particular commit. They can be annotated (with additional information) or lightweight. Tags are useful for managing releases and deployments.

Containerization (Docker, Kubernetes)

Docker
What is the Docker daemon?
+
The Docker daemon is the background service that runs containers.
Explain Docker architecture and lifecycle.
+
Docker includes:
Docker Client → runs Docker commands
Docker Daemon → manages containers
Docker Registry → stores Docker images
Docker Containers → run applications inside isolated environments
Write five Docker commands and explain them.
+
docker pull → download a Docker image
docker run → start a container
docker ps → list running containers
docker stop → stop a container
docker rm → remove a container
Write a Jenkins pipeline that builds and pushes a Docker image.
+
pipeline {
    agent any
    stages {
        stage('Build') {
            steps { sh 'docker build -t myapp:latest .' }
        }
        stage('Push') {
            steps {
                withDockerRegistry([credentialsId: 'dockerhub']) {
                    sh 'docker push myapp:latest'
                }
            }
        }
    }
}
Write a simple Dockerfile to create a Docker image.
+
FROM ubuntu:latest
RUN apt update && apt install -y nginx
CMD ["nginx", "-g", "daemon off;"]
What is the difference between S3 buckets and EBS volumes?
+
S3: Object storage for files and backups.
EBS: Block storage for persistent disks.
Amazon AMI vs. Snapshot: what's the difference?
+
AMI is a bootable image with an OS and software.
Snapshot is a backup of a disk or EBS volume.
Explain remote state locking in Terraform.
+
With the S3 backend, Terraform can lock the state file using a DynamoDB table to prevent multiple users from modifying it at the same time (see the sketch below).
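A minimal sketch of such a backend block; the bucket, key, and table names are illustrative assumptions:

terraform {
  backend "s3" {
    bucket         = "my-terraform-state"     # hypothetical state bucket
    key            = "prod/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"        # DynamoDB table used for state locking
  }
}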
What is Docker, and how does it differ from a virtual machine?
+
Ans: Docker: A containerization platform that packages applications and their dependencies in containers, enabling consistent environments across development and production. Containers share the host OS kernel but have isolated processes, filesystems, and resources.
Virtual Machines (VMs): Full-fledged systems that emulate hardware and run separate OS instances. VMs run on a hypervisor, which sits on the host machine.
Key Differences:
Performance: Docker containers are lightweight and start faster because they share the host OS, whereas VMs run an entire OS and have higher overhead.
Isolation: VMs offer stronger isolation as they emulate hardware, while Docker containers isolate at the process level using the host OS kernel.
Resource Efficiency: Docker uses less CPU and memory since it doesn't require a full OS in each container, whereas VMs consume more resources due to running a separate OS.
How do you create and manage Docker images and containers?
+
Ans: To create Docker images, you typically:
Write a Dockerfile: This file contains instructions for building an image, such as specifying the base image, copying application code, installing dependencies, and setting the entry point.
# Example Dockerfile
FROM node:14
WORKDIR /app
COPY . .
RUN npm install
CMD ["npm", "start"]
Build the image: Using the Docker CLI, you can build an image from the Dockerfile.
docker build -t my-app:1.0 .
Push the image to a registry like Docker Hub for future use:
docker push my-app:1.0
To manage Docker containers:
Run the container: You can run a container from an image.
docker run -d --name my-running-app -p 8080:8080 my-app:1.0
Stop, start, and remove containers:
docker stop my-running-app
docker start my-running-app
docker rm my-running-app
Use tools like Docker Compose for multi-container applications to define and run multiple containers together.
How do you optimize Docker images for production?
+
Ans:
Use smaller base images: Start from lightweight images such as alpine, which reduces the image size and minimizes security risks.
FROM node:14-alpine
Leverage multi-stage builds: This allows you to keep the build dependencies out of the final production image, reducing size.
# First stage: build the app
FROM node:14 as build
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
# Second stage: use only the compiled app
FROM nginx:alpine
COPY --from=build /app/build /usr/share/nginx/html
Minimize layers: Each line in the Dockerfile adds a layer to the image. Combine commands where possible.
RUN apt-get update && apt-get install -y curl git && rm -rf /var/lib/apt/lists/*
Use .dockerignore: This file ensures that unnecessary files like .git or local files are excluded from the build context.
Optimize caching: Reorder commands in your Dockerfile to take advantage of Docker's build cache.

Kubernetes

Kubernetes General Q&A
What is Kubernetes, and how does it help in container orchestration?
+
Ans: Kubernetes (K8s) is an open-source container orchestration platform that automates the deployment, scaling, and management of containerized applications. It helps with:
Scaling: Kubernetes can automatically scale applications up or down based on traffic or resource utilization.
Load Balancing: Distributes traffic across multiple containers to ensure high availability.
Self-healing: Restarts failed containers, replaces containers, and kills containers that don't respond to health checks.
Automated Rollouts and Rollbacks: Manages updates to your application with zero downtime and rolls back if there are failures.
Resource Management: It handles the allocation of CPU, memory, and storage resources across containers.
Explain how you've set up a Kubernetes cluster.
+
Setting up a Kubernetes cluster generally involves these steps:
Install Kubernetes tools: Use tools like kubectl (the Kubernetes CLI) and kubeadm for setting up the cluster. Alternatively, you can use cloud providers like AWS EKS or managed clusters like GKE or AKS.
Set up nodes: Initialize the control plane node (master node) using kubeadm init and join worker nodes using kubeadm join.
sudo kubeadm init
Install a networking plugin: Kubernetes requires a network overlay to allow communication between Pods. I use Calico or Weave for setting up networking.
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
Deploy applications: Once the cluster is up, you deploy containerized applications by creating Kubernetes objects like Deployments, Services, and ConfigMaps.
kubectl apply -f deployment.yaml
Set up monitoring: Tools like Prometheus and Grafana can be installed for cluster monitoring and alerting.
What are Kubernetes services, and how do they differ from Pods?
+
Ans: Kubernetes Pods: Pods are the smallest unit in Kubernetes and represent one or more containers that share the same network and storage. A Pod runs a single instance of an application and is ephemeral in nature.
Kubernetes Services: Services provide a stable IP address or DNS name for a set of Pods. Pods are dynamic and can come and go, but a Service ensures that the application remains accessible by routing traffic to healthy Pods.
Key differences: Pods are ephemeral and can be replaced, but Services provide persistent access to a group of Pods. Services enable load balancing and internal and external network communication, whereas Pods are more for the container runtime.
Example of a Service YAML:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
  type: LoadBalancer
This creates a load-balanced service that routes traffic to Pods labeled with app: MyApp on port 80 and directs it to the containers' port 8080.
What is Kubernetes and why is it used?
+
Answer: Kubernetes is an open-source container orchestration platform that automates the deployment, scaling, and management of containerized applications. It's used to efficiently run and manage distributed applications across clusters of servers.
What are Pods in Kubernetes?
+
Answer: A Pod is the smallest and simplest Kubernetes object. It represents a single instance of a running process in the cluster and can contain one or more tightly coupled containers that share the same network namespace.
Explain the difference between a Deployment and a StatefulSet in Kubernetes.
+
Answer:
Deployment: Used for stateless applications and manages Pods, ensuring the correct number are running at all times. It can easily scale up or down and recreate Pods if needed.
StatefulSet: Used for stateful applications. It maintains unique network identities and persistent storage for each Pod and is useful for databases and services that require stable storage and ordered, predictable deployment and scaling.
How do you expose a Kubernetes application to external traffic?
+
Answer: There are several ways to expose a Kubernetes application:
Service of type LoadBalancer: Creates a load balancer for your application, typically in cloud environments.
Ingress: Provides HTTP and HTTPS routing to services within the cluster and supports features like SSL termination.
NodePort: Exposes the application on a static port on each node in the cluster.
How does Kubernetes handle storage?
+
Answer: Kubernetes provides several storage options, such as:
Persistent Volumes (PV): A resource in the cluster that provides durable storage.
Persistent Volume Claims (PVC): A request for storage by a user or a Pod.
StorageClass: Defines different types of storage (e.g., SSD, HDD) and allows for dynamic provisioning of PVs based on the storage class.
What are the different types of Kubernetes volumes?
+
emptyDir, hostPath, persistentVolumeClaim, configMap, secret, NFS, CSI.
If a pod is in a crash loop, what might be the reasons, and how can you recover it?
+
Check logs: kubectl logs <pod-name>
Describe the pod: kubectl describe pod <pod-name>
Common issues: wrong image, missing config, insufficient memory.
What is the difference between StatefulSet and DaemonSet?
+
StatefulSet: Used for stateful applications (e.g., databases).
DaemonSet: Runs a pod on every node (e.g., monitoring agents).
What is a sidecar container in Kubernetes, and what are its use cases?
+
A helper container running alongside the main container. Example: log forwarding, security monitoring.
If pods fail to start during a rolling update, what strategy would you use to identify the issue and roll back?
+
Check kubectl get pods and kubectl describe pod <pod-name>.
Rollback: kubectl rollout undo deployment <deployment-name>
What is Blue-Green Deployment?
+
Blue-Green Deployment involves two environments:
Blue is the live system.
Green is the new version.
Once Green is tested, traffic is switched to it (see the sketch below).
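One common way to switch traffic in Kubernetes is to repoint a Service's selector from the blue Deployment to the green one; the service name, labels, and ports here are illustrative assumptions:

apiVersion: v1
kind: Service
metadata:
  name: my-app            # hypothetical service name
spec:
  selector:
    app: my-app
    version: green        # was "blue"; changing this label switches traffic
  ports:
  - port: 80
    targetPort: 8080

Applying this change shifts all traffic to the green pods; reverting the selector to blue routes traffic back.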
What is Canary Deployment?
+
In Canary Deployment, the new version is released to a small percentage of users first. If stable, it is rolled out to everyone.
What is a Rolling Update?
+
A Rolling Update gradually replaces old instances with new ones without downtime.
What is a Feature Flag?
+
Feature Flags allow enabling or disabling features without redeploying code.
What is a Kubernetes Operator
+
A Kubernetes Operator is a toolthat automates the management of applications onKubernetes. It monitors the application and takes automatic actions like scaling, updating,and restarting based on the application’s needs.
What is a Custom Resource Definition (CRD)
+
Kubernetes has built-in objects like Pods and Services. CRDs let you create customKubernetes objects for your specific applications.
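A minimal illustrative CRD, here defining a hypothetical CronTab resource (the group, names, and schema fields are placeholders, not from this document):
```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: crontabs.example.com      # must be <plural>.<group>
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                cronSpec:
                  type: string    # e.g. "*/5 * * * *"
```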
What is a Custom Controller?
+
A controller is a program that watches Kubernetes objects and makes changes if needed. A custom controller works with CRDs to manage user-defined resources.
What are API groups in Kubernetes?
+
API groups in Kubernetes help organize different types of resources. Examples:
- apps/v1 → used for Deployments and StatefulSets
- networking.k8s.io/v1 → used for Ingress and NetworkPolicies
What is etcd?
+
etcd is a key-value database that stores all Kubernetes cluster data, including Pods, Nodes, and Configs.
Kubernetes Architecture
What are the main components of Kubernetes architecture?
+
Answer: Kubernetes architecture consists of two major components:
- Control Plane: Manages the overall cluster, including scheduling, maintaining the desired state, and orchestrating workloads. Key components are the API Server, etcd, the Scheduler, and the Controller Manager.
- Worker Nodes: The machines (physical or virtual) that run the containerized applications. Key components are the Kubelet, Kube-proxy, and the container runtime.
What is the role of the Kubernetes API Server?
+
Answer: The Kube API Server is the central component of the Kubernetes Control Plane. It:
- Acts as the front end to the control plane, exposing the Kubernetes API.
- Processes REST requests (kubectl commands or other API requests) and updates the cluster's state (e.g., creating or scaling a deployment).
- Manages communication between internal control plane components and external users.
What is etcd and why is it important in Kubernetes?
+
Answer: etcd is a distributed key-value store used by Kubernetes to store all the data related to the cluster's state. This includes information about pods, secrets, config maps, services, and more. It is important because:
- It acts as the source of truth for the cluster's configuration.
- It ensures data consistency and high availability across the control plane nodes.
What does the Kubernetes Scheduler do?
+
Answer: The Scheduler is responsible for assigning pods to nodes. It considers resource availability (CPU, memory), node conditions, affinity/anti-affinity rules, and other constraints when deciding where a pod should be placed. The Scheduler ensures that pods are distributed across nodes efficiently.
What is a Kubelet, and what role does it play?
+
Answer: The Kubelet is an agent running on every worker node in the Kubernetes cluster. Its role is to:
- Ensure that the containers described in the pod specs are running correctly on the worker node.
- Communicate with the control plane to receive instructions and report back the status of the node and the running pods.
- Interact with the container runtime (like Docker or containerd) to manage the container lifecycle.
What is a pod in Kubernetes?
+
Answer: A pod is the smallest and simplest Kubernetes object. It represents a group of one or more containers that share storage and network resources and have the same context. Pods are usually created to run a single instance of an application, though they can contain multiple tightly coupled containers.
How does Kubernetes networking work?
+
Answer: Kubernetes uses a flat network model where every pod gets its own unique IP address. Key features include:
- Pods can communicate with each other across nodes without NAT.
- Kubernetes relies on CNI (Container Network Interface) plugins like Calico, Flannel, or Weave to implement network connectivity.
- Kube-proxy on each node manages service networking and ensures traffic is properly routed to the right pod.
What is the role of the Controller Manager?
+
Answer: The Controller Manager runs various controllers that monitor the cluster's state and ensure the actual state matches the desired state. Some common controllers are:
- Node Controller: Watches the health and status of nodes.
- Replication Controller: Ensures the specified number of pod replicas are running.
- Job Controller: Manages the completion of jobs.
What is the role of the Kube-proxy?
+
Answer: The Kube-proxy is responsible for network connectivity within Kubernetes. It:
- Maintains network rules on worker nodes.
- Routes traffic from services to the appropriate pods, enabling communication between different pods across nodes.
- Uses iptables or IPVS to ensure efficient routing of requests.
What are Namespaces in Kubernetes?
+
Answer: Namespaces in Kubernetes provide a way to divide cluster resources between multiple users or teams. They are used to:
- Organize objects (pods, services, etc.) in the cluster.
- Allow separation of resources for different environments (e.g., dev, test, prod) or teams.
- Apply resource limits and access controls at the namespace level.
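For example, a namespace with a quota attached might be sketched as follows; the namespace name and the limits are illustrative assumptions:
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: team-a                # placeholder namespace
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    pods: "20"                # illustrative limits
    requests.cpu: "4"
    requests.memory: 8Gi
```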
How does Kubernetes achieve high availability?
+
Answer: Kubernetes achieves high availability (HA) through:
- Multiple Control Plane Nodes: The control plane can be replicated across multiple nodes, so if one fails, others take over.
- etcd clustering: A highly available and distributed etcd cluster ensures data consistency and failover.
- Pod Replication: Workloads can be replicated across multiple worker nodes, so if one node fails, the service continues running on others.
What is the function of the Cloud Controller Manager?
+
Answer: The Cloud Controller Manager is responsible for managing cloud-specific control logic in a Kubernetes cluster running on cloud providers like AWS, GCP, or Azure. It:
- Manages cloud-related tasks such as node instances, load balancers, and persistent storage.
- Decouples cloud-specific logic from the core Kubernetes components.
What is the significance of a Service in Kubernetes?
+
Answer: A Service in Kubernetes defines a logical set of pods and a policy to access them. Services provide a stable IP address and DNS name for accessing the set of pods even if the pods are dynamically created or destroyed. A Service can expose the application to:
- Internal services within the cluster (ClusterIP).
- External clients via load balancers (LoadBalancer service).
How does Kubernetes handle scaling?
+
Answer: Kubernetes supports both manual and auto-scaling mechanisms:
- Manual scaling can be done using the kubectl scale command to adjust the number of replicas of a deployment or service.
- Horizontal Pod Autoscaler (HPA) automatically scales the number of pods based on CPU/memory utilization or custom metrics.
- Vertical Pod Autoscaler (VPA) can adjust the resource requests and limits of pods based on their observed resource consumption.
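As an illustrative sketch using the autoscaling/v2 API, an HPA that keeps a hypothetical Deployment between 2 and 10 replicas at roughly 50% average CPU could look like this (names are placeholders):
```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa            # placeholder name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app              # hypothetical Deployment to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50
```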
Networking in Kubernetes (Ingress Controller, Calico)
K8s Networking: General Q&A
What is Kubernetes Networking?
+
Answer: Kubernetes networking enables communication between different components inside a cluster, such as Pods, Services, and external networks. It provides networking policies and models to manage how Pods communicate with each other and with external entities.
What are the key networking components in Kubernetes?
+
Answer:
- Pods: The smallest unit in Kubernetes, containing one or more containers. Each Pod has its own IP address.
- Services: Expose a set of Pods as a network service, allowing external or internal communication.
- ClusterIP: Default Service type, accessible only within the cluster.
- NodePort: Exposes a Service on a static port on each node.
- LoadBalancer: Exposes the Service externally using a cloud provider's load balancer.
- Ingress Controller: Manages external access to Services using HTTP/HTTPS routes.
- Network Policies: Define rules for allowing or blocking traffic between Pods.
How does Pod-to-Pod communication work in Kubernetes?
+
Answer: Every Pod in a Kubernetes cluster gets a unique IP address, and Pods communicate directly using these IPs. The Kubernetes networking model ensures that all Pods can communicate with each other without NAT (Network Address Translation).
What is a Service in Kubernetes? Why is it needed?
+
Answer: A Service is an abstraction that defines a logical set of Pods and a policy for accessing them. Since Pods are ephemeral and can be replaced, their IP addresses change frequently. Services provide a stable endpoint for accessing Pods using DNS.
What are the different types of Kubernetes Services?
+
Answer:
- ClusterIP: Default type; allows internal communication within the cluster.
- NodePort: Exposes the Service on a static port on all nodes.
- LoadBalancer: Integrates with cloud providers to expose Services externally.
- ExternalName: Maps a Service to an external DNS name.
What is Ingress in Kubernetes?
+
Answer: Ingress is an API object that manages external HTTP and HTTPS access to Services within the cluster. It routes traffic based on defined rules, such as host-based or path-based routing.
How does DNS work in Kubernetes?
+
Answer: Kubernetes provides built-in DNS resolution for Services. When a Service is created, it gets a DNS name in the format service-name.namespace.svc.cluster.local, which resolves to the Service's IP address.
What is a Network Policy in Kubernetes?
+
Answer: A Network Policy is a Kubernetes object that defines rules for controlling inbound and outbound traffic between Pods. It uses labels to enforce traffic rules at the Pod level.
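An illustrative NetworkPolicy sketch: it assumes Pods labeled app: backend and role: frontend exist, and admits only frontend-to-backend traffic on TCP 8080 (all names and the port are assumptions):
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend   # placeholder name
spec:
  podSelector:
    matchLabels:
      app: backend                  # the policy applies to these Pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: frontend        # only these Pods may connect
      ports:
        - protocol: TCP
          port: 8080
```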
What are some common CNI (Container Network Interface) plugins used in Kubernetes?
+
Answer:
- Calico: Provides networking and network policy enforcement.
- Flannel: A simple overlay network for Kubernetes.
- Cilium: Uses eBPF for security and networking.
- Weave: Implements a mesh network for Pods.
How does Kubernetes handle external traffic?
+
Answer: External traffic can be managed using:
- NodePort Services: Expose a Service on a specific port on all cluster nodes.
- LoadBalancer Services: Use a cloud provider's load balancer.
- Ingress Controllers: Route HTTP/HTTPS traffic using host-based or path-based rules.
How do you restrict Pod-to-Pod communication in Kubernetes?
+
Answer: By applying Network Policies, which define rules for allowed and denied traffic between Pods.
What is the difference between ClusterIP, NodePort, and LoadBalancer?
+
Answer:
- ClusterIP — Accessibility: internal to the cluster. Use case: default type, used for internal communication.
- NodePort — Accessibility: exposes the service on each node's IP at a static port. Use case: external access without a cloud load balancer.
- LoadBalancer — Accessibility: integrates with the cloud provider's load balancer. Use case: external access via a cloud-managed load balancer.
What is Kube-proxy and how does it work?
+
Answer: Kube-proxy is a network component that maintains network rules for directing traffic to Services. It manages traffic routing at the iptables level or using IPVS.
How do Kubernetes Pods communicate across different nodes?
+
Answer: Kubernetes uses CNI plugins (such as Calico, Flannel, or Weave) to create an overlay network that enables Pods to communicate across nodes without requiring NAT.
What happens when you delete a Pod in Kubernetes?
+
Answer: When a Pod is deleted, Kubernetes automatically removes its IP address from the network, updates DNS, and reschedules a new Pod if required.
Advanced Kubernetes Networking Interview Questions and Answers
What is the role of CNI (Container Network Interface) in Kubernetes?
+
Answer: CNI is a specification and a set of libraries that enable networking for containers. Kubernetes uses CNI plugins to configure network interfaces inside containers and set up rules for inter-Pod communication.
How does Kubernetes handle Service Discovery?
+
Answer: Kubernetes provides Service Discovery in two ways:
- Environment Variables: Kubernetes injects environment variables into Pods when a Service is created.
- DNS-based Service Discovery: The Kubernetes DNS automatically assigns a domain name to Services (service-name.namespace.svc.cluster.local), allowing Pods to resolve Services using DNS queries.
What is the difference between an Ingress Controller and a LoadBalancer?
+
Answer:
- Functionality: an Ingress Controller manages HTTP/HTTPS routing; a LoadBalancer provides external access to a Service.
- Protocols: an Ingress Controller handles HTTP and HTTPS; a LoadBalancer handles any protocol (TCP, UDP, HTTP, etc.).
- Cost: an Ingress Controller is more cost-effective; a LoadBalancer is cloud provider-dependent and may have higher costs.
- Use case: an Ingress Controller routes traffic within the cluster; a LoadBalancer exposes Services externally.
What is IPVS mode in kube-proxy?
+
Answer: IPVS (IP Virtual Server) is an alternative to iptables in kube-proxy. It provides better performance for high-scale environments because it uses a kernel-space hash table instead of processing packet rules one by one (as in iptables).
How does Calico work in Kubernetes?
+
Answer: Calico provides networking and network policy enforcement. It uses BGP (Border Gateway Protocol) to distribute routes dynamically and allows Pods to communicate efficiently across nodes without an overlay network.
What is the role of an Overlay Network in Kubernetes?
+
Answer: An overlay network abstracts the underlying physical network, enabling communication between Pods across different nodes by encapsulating packets inside another protocol like VXLAN. Flannel and Weave use overlay networking.
How does Kubernetes handle multi-tenancy in networking?
+
Answer: Kubernetes achieves multi-tenancy using:
- Network Policies: Restrict communication between different tenant namespaces.
- Different CNIs: Some CNIs, like Calico, support network isolation per namespace.
- Multi-network support: Plugins like Multus allow assigning multiple network interfaces per Pod.
How can you debug networking issues in Kubernetes?
+
Answer: Some common steps to debug networking issues:
- Check Pod IPs: kubectl get pods -o wide
- Inspect network policies: kubectl get networkpolicy -A
- Test connectivity between Pods: kubectl exec -it <pod-name> -- ping <target-ip>
- Check DNS resolution: kubectl run -it --rm --image=busybox dns-test -- nslookup my-service
- Inspect kube-proxy logs: kubectl logs -n kube-system <kube-proxy-pod>
What are Headless Services in Kubernetes?
+
Answer: A Headless Service (spec.clusterIP: None) does not allocate a cluster IP and allows direct Pod-to-Pod communication by exposing the individual Pod IPs instead of a single Service IP.
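A minimal headless Service sketch; the name, selector, and port are illustrative assumptions:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-headless-service   # placeholder name
spec:
  clusterIP: None             # makes the Service headless
  selector:
    app: my-stateful-app      # assumes Pods carry this label
  ports:
    - port: 5432
```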
What is a Dual-Stack Network in Kubernetes?
+
Answer: A dual-stack network allows Kubernetes clusters to support both IPv4 and IPv6 addresses simultaneously. This helps in migrating workloads to IPv6 while maintaining backward compatibility.
How does Kubernetes handle external traffic when using Ingress?
+
Answer: When using an Ingress Controller, external traffic is handled by ingress rules that map HTTP/HTTPS requests to specific Services. The Ingress Controller listens on ports 80/443 and routes traffic based on hostnames or paths.
What is the purpose of the HostPort and HostNetwork settings in Kubernetes?
+
Answer:
- HostPort: Allows a container to bind directly to a port on the Node. It is useful but can lead to port conflicts.
- HostNetwork: Allows a Pod to use the Node's network namespace, exposing all its ports. This is used for system-level services like DNS and monitoring agents.
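A sketch of a Pod that opts into the node's network namespace, as a monitoring agent might; the name and image are hypothetical:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: node-agent                  # placeholder name
spec:
  hostNetwork: true                 # share the node's network namespace
  containers:
    - name: agent
      image: example/agent:latest   # hypothetical image
      ports:
        - containerPort: 9100       # listens directly on the node
```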
How does Service Mesh work in Kubernetes?
+
Answer: A Service Mesh (e.g., Istio, Linkerd) provides additional control over service-to-service communication by handling:
- Traffic management (routing, retries, load balancing)
- Security (TLS encryption, authentication, authorization)
- Observability (metrics, logs, tracing)
It operates using sidecar proxies injected into Pods to manage network traffic.
How does MetalLB provide Load Balancing in bare-metal Kubernetes clusters?
+
Answer: Since bare-metal clusters do not have a built-in LoadBalancer like cloud providers do, MetalLB assigns external IP addresses to Kubernetes Services and provides L2 (ARP/NDP) or L3 (BGP) routing to direct traffic to nodes.
How does Kubernetes handle networking in multi-cloud or hybrid cloud environments?
+
Answer:
- Cluster Federation: Kubernetes Federation allows multi-cluster management across cloud providers.
- Global Load Balancers: Cloud-based global load balancers (e.g., AWS Global Accelerator) direct traffic between different Kubernetes clusters.
- Service Mesh (Istio, Consul): Helps manage communication across multiple clusters in hybrid-cloud setups.
Ingress Controller
What is an Ingress Controller in Kubernetes?
+
Answer: An Ingress Controller is a specialized load balancer for Kubernetes clusters that manages external access to the services within the cluster. It interprets the Ingress resource, which defines the rules for routing external HTTP/S traffic to the services based on the requested host and path. Common Ingress Controllers include NGINX, Traefik, and HAProxy.
How does an Ingress Controller differ from a Load Balancer?
+
Answer: An Ingress Controller is specifically designed to handle HTTP/S traffic and route it to services within a Kubernetes cluster based on defined rules. In contrast, a Load Balancer is typically used for distributing incoming traffic across multiple instances of a service, and it can handle different types of traffic (not limited to HTTP/S). While Load Balancers can be integrated with Ingress Controllers, Ingress Controllers offer more sophisticated routing capabilities, such as path-based and host-based routing.
Can you explain how to set up an Ingress Controller in a Kubernetes cluster?
+
Answer: To set up an Ingress Controller, follow these general steps:
1. Choose an Ingress Controller: Select one (e.g., NGINX or Traefik).
2. Deploy the Ingress Controller: Use a YAML manifest or Helm chart to deploy it in your cluster.
```sh
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/cloud/deploy.yaml
```
3. Create Ingress Resources: Define Ingress resources in YAML files that specify the routing rules.
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-service
                port:
                  number: 80
```
4. Configure DNS: Update your DNS settings to point to the Ingress Controller's external IP.
What are some common features of an Ingress Controller?
+
Answer: Common features include:
- Path-based Routing: Directing traffic based on the request path.
- Host-based Routing: Routing based on the requested host.
- TLS Termination: Handling HTTPS traffic and managing SSL certificates.
- Load Balancing: Distributing traffic to multiple backend services.
- Authentication and Authorization: Integrating with external authentication services.
- Rate Limiting and Caching: Controlling traffic rates and caching responses.
How do you handle SSL termination with an Ingress Controller?
+
Answer: SSL termination with an Ingress Controller can be managed by specifying TLS configuration in the Ingress resource. You can use Kubernetes secrets to store the TLS certificate and key, and reference them in your Ingress resource:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  tls:
    - hosts:
        - example.com
      secretName: example-tls
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-service
                port:
                  number: 80
```
What are some best practices when configuring an Ingress Controller?
+
Answer: Best practices include:
- Use TLS: Always secure traffic using HTTPS.
- Limit Ingress Rules: Keep your Ingress resources simple and avoid over-complicating routing rules.
- Monitor and Log Traffic: Implement monitoring and logging for performance analysis and debugging.
- Use Annotations: Leverage annotations for specific configurations like timeouts or custom error pages.
- Implement Rate Limiting: Protect backend services from overloading by implementing rate limits.
How do you troubleshoot issues with an Ingress Controller?
+
Answer: To troubleshoot Ingress Controller issues:
- Check Ingress Resource Configuration: Ensure the Ingress resource is correctly configured and points to the right service.
- Inspect Logs: Review logs from the Ingress Controller pod for errors or misconfigurations.
- Test Connectivity: Use tools like curl to test connectivity to the service through the Ingress.
- Verify DNS Settings: Ensure that DNS records point to the Ingress Controller's external IP.
- Check Service Health: Confirm that the backend services are running and healthy.
What is the role of annotations in an Ingress resource?
+
Answer: Annotations in an Ingress resource allow you to configure specific behaviors and features of the Ingress Controller. These can include settings for load balancing algorithms, SSL configurations, rate limiting, and custom rewrite rules. Annotations can vary depending on the Ingress Controller being used.
Can you explain what a Virtual Service is in the context of Ingress Controllers?
+
Answer: A Virtual Service, commonly associated with service mesh technologies like Istio, defines how requests are routed to services. While Ingress Controllers manage external traffic, Virtual Services allow more advanced routing, traffic splitting, and service-level policies within the mesh. They provide finer control over service interactions compared to standard Ingress resources.
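As a hedged sketch of the idea, an Istio VirtualService that splits traffic 90/10 between two subsets of a hypothetical "reviews" service might look like this (the service name, subsets, and weights are illustrative, and the subsets would need a matching DestinationRule):
```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews                 # placeholder service name
spec:
  hosts:
    - reviews
  http:
    - route:
        - destination:
            host: reviews
            subset: v1          # assumes a DestinationRule defines v1/v2
          weight: 90
        - destination:
            host: reviews
            subset: v2
          weight: 10
```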
How do you secure your Ingress Controller?
+
Answer: To secure an Ingress Controller, you can:
- Use TLS: Ensure all traffic is encrypted using TLS.
- Implement Authentication: Integrate authentication mechanisms (e.g., OAuth, JWT).
- Restrict Access: Use network policies to limit access to the Ingress Controller.
- Enable Rate Limiting: Protect against DDoS attacks by limiting incoming traffic rates.
- Keep the Ingress Controller Updated: Regularly update to the latest stable version to mitigate vulnerabilities.
Calico
What is Calico in Kubernetes?
+
Answer: Calico is an open-source Container Network Interface (CNI) that provides high-performance networking and network security for Kubernetes clusters. It enables IP-based networking and network policies, and integrates with BGP (Border Gateway Protocol) to route traffic efficiently.
What are the key features of Calico?
+
Answer:
- BGP-based Routing: Uses BGP to distribute routes between nodes.
- Network Policies: Enforces fine-grained security rules for inter-Pod communication.
- Support for Multiple Backends: Works with Linux kernel eBPF, VXLAN, and IP-in-IP encapsulation.
- Cross-Cluster Networking: Enables multi-cluster communication.
- IPv4 & IPv6 Dual-Stack Support: Allows clusters to use both IPv4 and IPv6.
How does Calico differ from other CNIs like Flannel and Cilium?
+
Answer:
- Networking type: Calico uses Layer 3 BGP routing; Flannel uses a Layer 2 overlay (VXLAN); Cilium is eBPF-based.
- Performance: Calico is high (no encapsulation needed); Flannel is medium (encapsulation overhead); Cilium is high (eBPF is kernel-native).
- Network policies: Calico yes; Flannel no; Cilium yes.
- Encapsulation: Calico optional (BGP preferred); Flannel VXLAN or IP-in-IP; Cilium none (eBPF).
- Ideal for: Calico suits security-focused, scalable clusters; Flannel suits simple, lightweight clusters; Cilium suits high-performance, modern networking.
How does Calico handle Pod-to-Pod communication?
+
Answer:
- Direct Routing (BGP Mode): Each node advertises its Pod CIDR using BGP, allowing direct Pod-to-Pod communication without encapsulation.
- Encapsulation (IP-in-IP or VXLAN Mode): If BGP is not available, Calico encapsulates Pod traffic inside IP-in-IP or VXLAN tunnels.
- eBPF Mode: Uses eBPF to improve packet processing speed and security.
What are the different Calico deployment modes?
+
Answer:
- BGP Mode: Uses BGP for direct Pod-to-Pod communication.
- Overlay Mode (VXLAN or IP-in-IP): Encapsulates traffic for clusters without BGP support.
- eBPF Mode: Uses eBPF instead of iptables for better performance.
How does Calico implement Network Policies in Kubernetes?
+
Answer: Calico extends the Kubernetes NetworkPolicy to enforce security rules. It supports:
- Ingress and Egress Rules: Control incoming and outgoing traffic.
- Namespace isolation: Restrict Pod communication between namespaces.
- Application-based Security: Enforce rules based on labels, CIDRs, and ports.
What is Felix in Calico?
+
Answer: Felix is the primary Calico agent running on each node. It programs routes, security policies, and firewall rules using iptables, eBPF, or IPVS.
What is Typha in Calico?
+
Answer: Typha is an optional component in Calico that optimizes scalability by reducing API load on the Kubernetes API server. It aggregates updates before sending them to many Felix agents.
How does Calico use BGP for networking?
+
Answer: Calico can integrate with BGP peers (e.g., routers, switches) to announce Pod network CIDRs. Each node advertises its assigned Pod IP range, allowing direct routing instead of overlay networks.
How do you install Calico in a Kubernetes cluster?
+
Answer: You can install Calico using kubectl, Helm, or an operator-based deployment.
1. Install Calico in a single command:
```sh
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
```
2. Verify the installation:
```sh
kubectl get pods -n calico-system
```
3. Check network status:
```sh
calicoctl node status
```
What command do you use to manage Calico networking?
+
Answer: The calicoctl CLI is used for managing Calico networking. Example commands:
- View node status: calicoctl node status
- Check BGP peers: calicoctl get bgppeer
- List network policies: calicoctl get policy -o yaml
How do you create a Calico Network Policy?
+
Answer: Example Calico NetworkPolicy that applies to Pods labeled role == 'frontend' and allows ingress only from Pods labeled role == 'backend':
```yaml
apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: allow-frontend
  namespace: default
spec:
  selector: role == 'frontend'
  ingress:
    - action: Allow
      source:
        selector: role == 'backend'
```
Apply the policy:
```sh
kubectl apply -f calico-policy.yaml
```
How do you monitor Calico logs?
+
Answer:
- Felix logs: kubectl logs -n calico-system calico-node-xxxxx
- BGP routing logs: kubectl logs -n calico-system calico-bgp-daemon
- Check iptables rules: iptables -L -v -n
How does Calico provide multi-cluster networking?
+
Answer: Calico supports cross-cluster networking using BGP peering or Calico's VXLAN overlay mode. It allows Pods in different clusters to communicate securely.
What are the security features of Calico?
+
Answer:
- Network Policies: Control traffic between Pods and external resources.
- Host Endpoint Policies: Secure nodes by restricting access.
- eBPF-based Security: Uses eBPF for high-performance firewalling.
- WireGuard Encryption: Encrypts traffic between nodes.
How do you enable WireGuard encryption in Calico?
+
Answer: WireGuard provides encrypted Pod-to-Pod communication. To enable it:
```sh
calicoctl patch felixconfiguration default --type='merge' \
  --patch='{"spec": {"wireguardEnabled": true}}'
```
Verify:
```sh
calicoctl get node --show-all
```
What are common troubleshooting steps for Calico networking issues?
+
Answer:
- Check Pod IPs: kubectl get pods -o wide
- Verify Calico nodes: calicoctl node status
- Check if BGP peers are established: calicoctl get bgppeer
- Check routes on the node: ip route
- Test connectivity: ping <pod-ip>
How does Calico handle Service IPs?
+
Answer: Calico supports Kubernetes Services by integrating with kube-proxy. If kube-proxy is not used, Calico's eBPF mode can replace it for better performance.
How does Calico handle NAT in Kubernetes?
+
Answer:
- BGP Mode: No NAT required; Pods get routable IPs.
- Overlay Mode (VXLAN/IP-in-IP): NAT is required to route external traffic.
- eBPF Mode: Eliminates NAT overhead and provides direct routing.
Can Calico be used outside Kubernetes?
+
Answer: Yes, Calico can be used for networking on bare-metal servers, VMs, and hybrid cloud environments. It provides the same security and networking policies across different environments.
Infrastructure as Code (Terraform, Ansible)
Terraform
What is Infrastructure as Code (IaC), and how does it benefit a DevOps environment?
+
Ans: Infrastructure as Code (IaC) refers to managing and provisioning computing infrastructure through machine-readable script files rather than physical hardware configuration or interactive configuration tools. Key benefits in a DevOps environment include:
- Consistency: Infrastructure configurations are consistent across environments (development, testing, production), reducing errors due to configuration drift.
- Efficiency: Automation reduces manual intervention, speeding up deployment and scaling processes.
- Scalability: Easily replicate and scale infrastructure components as needed.
- Version Control: Infrastructure configurations can be versioned, tracked, and audited like application code.
- Collaboration: Enables collaboration between teams by providing a common language and process for infrastructure management.
How do you manage cloud infrastructure with Terraform?
+
Ans: Terraform is an IaC tool that allows you to define and manage cloud infrastructure as code. Here is how you manage cloud infrastructure with Terraform:
- Define Infrastructure: Write Terraform configuration files (.tf) that describe the desired state of your infrastructure resources (e.g., virtual machines, networks, databases).
- Initialize: Use terraform init to initialize your working directory and download the necessary providers and modules.
- Plan: Execute terraform plan to create an execution plan, showing what Terraform will do to reach the desired state.
- Apply: Run terraform apply to apply the execution plan, provisioning the infrastructure as defined in your configuration.
- Update and Destroy: Terraform can also update existing infrastructure (terraform apply again with changes) and destroy resources (terraform destroy) when no longer needed.
Can you explain the difference between Terraform and Ansible?
+
Ans: Terraform and Ansible are both tools used in DevOps and automation but serve different purposes:
- Terraform: Focuses on provisioning and managing infrastructure. It uses declarative configuration files (HCL) to define the desired state of infrastructure resources across various cloud providers and services. Terraform manages the entire lifecycle: create, modify, and delete.
- Ansible: Primarily a configuration management tool that focuses on automating the deployment and configuration of software and services on existing servers. Ansible uses procedural Playbooks (YAML) to describe automation tasks and does not manage infrastructure provisioning like Terraform.
How do you handle versioning in Infrastructure as Code?
+
Ans: Handling versioning in Infrastructure as Code is crucial for maintaining consistency and enabling collaboration:
- Version Control Systems: Store IaC files (e.g., Terraform .tf files) in a version control system (e.g., Git) to track changes, manage versions, and enable collaboration among team members.
- Commit and Tagging: Use meaningful commit messages and tags to denote changes and versions of infrastructure configurations.
- Release Management: Implement release branches or tags for different environments (e.g., development, staging, production) to manage configuration changes across environments.
- Automated Pipelines: Integrate IaC versioning with CI/CD pipelines to automate testing, deployment, and rollback processes based on versioned configurations.
What challenges did you face with configuration management tools?
+
Ans: Challenges with configuration management tools like Ansible or Chef often include:
- Complexity: Managing large-scale infrastructure and dependencies can lead to complex configurations and playbooks.
- Consistency: Ensuring consistency across different environments (e.g., OS versions, package dependencies) can be challenging.
- Scalability: Adapting configuration management to scale as infrastructure grows or changes.
- Security: Handling sensitive information (e.g., credentials, keys) securely within configuration management tools.
- Integration: Integrating with existing systems and tools within the organization's ecosystem.
Addressing these challenges typically involves careful planning, modular design of playbooks or recipes, automation, and robust testing practices to ensure the reliability and security of managed infrastructure.
What is a private module registry in Terraform?
+
A private registry hosts Terraform modules inside your organization, allowing controlled sharing across teams. Examples: Terraform Cloud, Artifactory.
If you delete the local Terraform state file and it's not stored in S3 or DynamoDB, how can you recover it?
+
You cannot recover it unless you have backups. If the state is stored remotely, pull it with:
```sh
terraform state pull
```
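A hedged sketch of avoiding this situation in the first place: configuring an S3 backend with DynamoDB locking so the state never lives only on one machine. The bucket, key, region, and table names below are assumptions:
```hcl
terraform {
  backend "s3" {
    bucket         = "my-terraform-state"    # hypothetical bucket
    key            = "prod/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"       # hypothetical lock table
    encrypt        = true
  }
}
```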
How do you import resources into Terraform?
+
Use terraform import to bring existing infrastructure into Terraform state:
```sh
terraform import aws_instance.example i-1234567890abcdef0
```
What is a dynamic block in Terraform?
+
A dynamic block is used to generate multiple nested blocks dynamically:
```hcl
dynamic "ingress" {
  for_each = var.ingress_rules
  content {
    from_port = ingress.value.port
    to_port   = ingress.value.port
    protocol  = "tcp"
  }
}
```
How can you create EC2 instances in two different AWS accounts simultaneously using Terraform?
+
Use multiple provider aliases:
```hcl
provider "aws" {
  alias   = "account1"
  profile = "profile1"
}

provider "aws" {
  alias   = "account2"
  profile = "profile2"
}

resource "aws_instance" "server1" {
  provider = aws.account1
}

resource "aws_instance" "server2" {
  provider = aws.account2
}
```
How do you handle an error stating that the resource already exists when creating resources with Terraform?
+
Use terraform import to bring the resource into Terraform state.
How does Terraform refresh work?
+
terraform refresh updates the state file with real-world infrastructure changes.
How would you upgrade Terraform plugins?
+
Run:
```sh
terraform init -upgrade
```
Ansible
Basic Questions
What is Ansible, and why is it used?
+
Ansible is an open-source automation tool used for configuration management, application deployment, and task automation. It is agentless and operates over SSH or WinRM.
What are the main components of Ansible?
+
- Control Node: The machine where Ansible runs
- Managed Nodes: Servers managed by Ansible
- Inventory: A file listing managed nodes
- Modules: Predefined commands for automation
- Playbooks: YAML-based scripts for automation
- Plugins: Extend Ansible's functionality
What makes Ansible different from other automation tools?
+
- Agentless (uses SSH/WinRM)
- Push-based automation
- YAML-based Playbooks for easy readability
What is an Ansible Playbook?
+
A Playbook is a YAML file that defines automation tasks to configure systems, deploy applications, or manage IT infrastructure.
What is the purpose of an Inventory file?
+
An inventory file defines managed hosts and groups. It can be static (manual) or dynamic (retrieved from cloud providers like AWS or Azure).
Intermediate Questions
What is Ansible Vault, and how is it used?
+
Ansible Vault encrypts sensitive data. Commands include:
```sh
ansible-vault create secrets.yml
ansible-vault encrypt secrets.yml
ansible-vault decrypt secrets.yml
```
How do you use Handlers in Ansible?
+
Handlers are executed only when notified. Example:
```yaml
tasks:
  - name: Update config
    template:
      src: config.j2
      dest: /etc/app/config
    notify: Restart app

handlers:
  - name: Restart app
    service:
      name: myapp
      state: restarted
```
What is Dynamic Inventory?
+
Dynamic Inventory fetches host data from external sources like AWS, Azure, or a database.
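A hedged sketch of a dynamic inventory file using the aws_ec2 inventory plugin; it assumes the amazon.aws collection is installed, and the region, tag filter, and group key are illustrative:
```yaml
# e.g. saved as inventory_aws_ec2.yml
plugin: amazon.aws.aws_ec2
regions:
  - us-east-1                     # illustrative region
filters:
  tag:Environment: production     # hypothetical tag filter
keyed_groups:
  - key: tags.Role                # group hosts by their Role tag
    prefix: role
```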
What is gather_facts in Ansible?
+
gather_facts collects system information such as the OS, IP addresses, etc. It can be disabled:
```yaml
gather_facts: no
```
How do you loop tasks in Ansible?
+
Use with_items:
```yaml
tasks:
  - name: Install packages
    apt:
      name: "{{ item }}"
    with_items:
      - nginx
      - git
```
How do you manage dependencies in Ansible Roles?
+
Define dependencies in meta/main.yml:
```yaml
dependencies:
  - role: common
  - role: webserver
```
Advanced Questions
What is delegate_to, and how is it used?
+
delegate_to runs a task on a different host:
```yaml
tasks:
  - name: Run command on another server
    command: uptime
    delegate_to: 192.168.1.100
```
How do you ensure idempotency in Ansible?
+
Ansible modules ensure that tasks run only if changes are required, avoiding redundant actions.
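As a sketch of the pattern: modules like apt check state themselves, while raw commands need explicit guards such as creates. The script path and flag file below are hypothetical:
```yaml
tasks:
  - name: Ensure nginx is installed (module checks state itself)
    apt:
      name: nginx
      state: present

  - name: Run an installer only once
    command: /opt/install.sh            # hypothetical script
    args:
      creates: /opt/app/installed.flag  # skipped when this file exists
```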
What are Lookup Plugins?
+
Lookup plugins retrieve data dynamically:
```yaml
tasks:
  - name: Read file content
    debug:
      msg: "{{ lookup('file', '/path/to/file.txt') }}"
```
What is the difference between vars, vars_files, and vars_prompt?
+
- vars: Inline variable declaration
- vars_files: External variable files
- vars_prompt: Prompt the user for input
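A short illustrative playbook fragment combining all three; the host group, file name, and variable names are assumptions:
```yaml
- hosts: web                    # hypothetical host group
  vars:
    app_port: 8080              # inline variable
  vars_files:
    - vars/common.yml           # hypothetical external file
  vars_prompt:
    - name: deploy_env
      prompt: "Which environment (dev/staging/prod)?"
      private: no
  tasks:
    - debug:
        msg: "Deploying to {{ deploy_env }} on port {{ app_port }}"
```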
How do you debug Ansible Playbooks?
+
- Use -v, -vv, or -vvv for verbose output
- Use the debug module:
```yaml
tasks:
  - debug:
      var: my_variable
```
What is the purpose of block, rescue, and always?
+
These handle errors gracefully:
```yaml
tasks:
  - block:
      - name: Try something
        command: /bin/true
    rescue:
      - name: Handle failure
        debug:
          msg: "Something went wrong"
    always:
      - name: Cleanup
        debug:
          msg: "Cleanup actions"
```
Scenario-Based Questions
Scenario: Install a specific package version on some hosts and remove it from others
```yaml
tasks:
  - name: Install nginx
    apt:
      name: nginx=1.18.0
      state: present
    when: "'install_nginx' in group_names"

  - name: Remove nginx
    apt:
      name: nginx
      state: absent
    when: "'remove_nginx' in group_names"
```
Scenario: Managing different environments (dev, staging, production)
- Use group_vars/ for environment-specific variables
- Use separate inventory files (inventory_dev, inventory_staging)
- Pass environment variables: ansible-playbook site.yml -e "env=staging"
Scenario: Ensure a file exists with specific content and permissions
```yaml
tasks:
  - name: Create a file
    copy:
      dest: /tmp/example.txt
      content: "Hello, World!"
      owner: root
      group: root
      mode: '0644'
```
Troubleshooting & Optimization
How do you speed up slow tasks?
+
- Increase forks in ansible.cfg
- Use async and poll for background execution
- Disable fact gathering if not needed:
```yaml
gather_facts: no
```
How do you handle SSH authentication issues?
+
- Use key-based SSH authentication
- Test the connection: ansible all -m ping
How do you test a Playbook without making changes?
+
Use --check for a dry run:
```sh
ansible-playbook site.yml --check
```
Miscellaneous Questions
What is the difference between include_tasks and import_tasks?
+
- include_tasks: Includes tasks dynamically at runtime
- import_tasks: Includes tasks statically at parse time
What are Ansible Filters?
+
Filters modify variables:
```yaml
tasks:
  - debug:
      msg: "{{ mylist | join(', ') }}"
```
How do you optimize Ansible Playbooks?
+
- Use when conditions to skip unnecessary tasks
- Use async for long-running tasks
- Use tags to run specific tasks
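A sketch combining these optimizations; the command and tag names are illustrative:
```yaml
tasks:
  - name: Only patch Debian-family hosts
    apt:
      upgrade: dist
    when: ansible_os_family == "Debian"
    tags: [patching]

  - name: Kick off a long job without blocking the play
    command: /usr/local/bin/rebuild-index   # hypothetical command
    async: 600   # allow up to 10 minutes to finish
    poll: 0      # fire and forget
    tags: [maintenance]
```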
What is the purpose of roles_path in ansible.cfg?
+
It defines where Ansible looks for roles.
How do you use the register keyword?
+
register stores task output in a variable:
```yaml
tasks:
  - name: Check free disk space
    command: df -h
    register: disk_space

  - debug:
      var: disk_space.stdout
```
What is the purpose of become, and how is it used?
+
become enables privilege escalation:
```yaml
tasks:
  - name: Install nginx
    apt:
      name: nginx
      state: present
    become: yes
```
Cloud Computing (AWS, Azure)
AWS
What cloud platforms have you worked with (AWS)?
+
AWS Services: Mention specific AWS services you've used, such as:
- EC2 (Elastic Compute Cloud) for scalable virtual servers.
- S3 (Simple Storage Service) for object storage.
- RDS (Relational Database Service) for managed databases.
- Lambda for serverless computing.
- VPC (Virtual Private Cloud) for network isolation.
- CloudFormation for Infrastructure as Code (IaC).
- EKS (Elastic Kubernetes Service) for managing Kubernetes clusters.
How do you ensure high availability and scalability in the cloud?
+
Ans:
High Availability:
- Multi-Availability Zones: Deploy applications across multiple availability zones (AZs) to ensure redundancy.
- Load Balancing: Use Elastic Load Balancing (ELB) to distribute incoming traffic across multiple instances.
- Auto Scaling: Set up Auto Scaling Groups (ASG) to automatically adjust the number of instances based on demand.
Scalability:
- Horizontal Scaling: Add or remove instances based on workload demands.
- Use of Services: Leverage services like RDS Read Replicas or DynamoDB for database scalability.
- Caching: Implement caching strategies using Amazon ElastiCache to reduce database load and improve response times.
What are the best practices for securing cloud infrastructure?
+
Ans:
- Identity and Access Management (IAM): Use IAM Roles and Policies to control access to resources, following the principle of least privilege.
- Encryption: Enable encryption for data at rest (e.g., using S3 server-side encryption) and in transit (e.g., using SSL/TLS).
- Network Security: Use Security Groups and Network ACLs to control inbound and outbound traffic. Consider using AWS WAF (Web Application Firewall) to protect web applications from common threats.
- Monitoring and Logging: Implement AWS CloudTrail and Amazon CloudWatch for logging and monitoring activities in your AWS account.
- Regular Audits: Conduct regular security assessments and audits to identify vulnerabilities and ensure compliance with best practices.
Can you explain how to set up auto-scaling for an application?
+
Ans: Auto-scaling in AWS allows your application to automatically scale its resources up or down based on demand. Here's a step-by-step guide:
1. Launch an EC2 Instance: Start by creating an EC2 instance that will serve as the template for scaling. Install your application and configure it properly.
2. Create a Launch Template or Configuration: Go to the EC2 Dashboard and create a Launch Template or Launch Configuration. This template defines the AMI, instance type, security groups, key pairs, and user-data scripts that will be used to launch new instances.
3. Create an Auto Scaling Group (ASG): Navigate to Auto Scaling in the EC2 dashboard and create an Auto Scaling Group. Specify the launch template or configuration that you created, and choose the VPC, subnets, and availability zones where the instances will be deployed.
4. Define Scaling Policies: Set the minimum, maximum, and desired number of instances, then define scaling policies based on metrics (e.g., CPU utilization, memory, network traffic):
- Target Tracking Policy: Automatically adjusts the number of instances to maintain a specific metric (e.g., keep CPU utilization at 50%).
- Step Scaling Policy: Adds or removes instances in steps based on metric thresholds.
- Scheduled Scaling: Scales up or down based on a specific time schedule.
5. Attach a Load Balancer (Optional): If you want to distribute traffic across the instances, attach an Elastic Load Balancer (ELB) to the Auto Scaling group. This ensures incoming requests are spread across all active instances.
6. Monitor and Fine-Tune: Use CloudWatch to monitor the performance of your Auto Scaling group and fine-tune your scaling policies to better match the application's workload.
Benefits:
- Elasticity: Automatically scale in response to traffic spikes or drops.
- High Availability: Instances can be spread across multiple availability zones for redundancy.
- Cost Efficiency: Pay only for the resources you use, preventing over-provisioning.
What is the difference between IaaS, PaaS, and SaaS?
+
Ans: These three terms describe different service models in cloud computing, each offering varying levels of management and control:
IaaS (Infrastructure as a Service):
- Definition: Provides virtualized computing resources over the internet. It includes storage, networking, and virtual servers but leaves the management of the OS, runtime, and applications to the user.
- Examples: Amazon EC2, Google Compute Engine, Microsoft Azure Virtual Machines.
- Use Case: When you want complete control over your infrastructure but want to avoid managing physical servers.
- Responsibilities: The cloud provider manages hardware, storage, networking, and virtualization; the user manages operating systems, middleware, applications, and data.
PaaS (Platform as a Service):
- Definition: Offers a development platform, allowing developers to build, test, and deploy applications without worrying about managing the underlying infrastructure (servers, OS, databases).
- Examples: AWS Elastic Beanstalk, Google App Engine, Heroku.
- Use Case: When you want to focus on developing applications without managing infrastructure.
- Responsibilities: The cloud provider manages servers, storage, databases, operating systems, and runtime environments; the user manages the application and its data.
SaaS (Software as a Service):
- Definition: Delivers fully managed software applications over the internet. The cloud provider manages everything, and the user only interacts with the application itself.
- Examples: Google Workspace, Microsoft Office 365, Salesforce, Dropbox.
- Use Case: When you need ready-to-use applications without worrying about development, hosting, or maintenance.
- Responsibilities: The cloud provider manages everything from infrastructure to the application; the user simply uses the software to accomplish tasks.
Key differences:
- IaaS — Control: full control over VMs, OS, etc. Use case: when you need virtual servers or storage. Examples: Amazon EC2, Azure VMs, GCE.
- PaaS — Control: control over the application. Use case: when you want to build and deploy without managing infrastructure. Examples: Heroku, AWS Elastic Beanstalk.
- SaaS — Control: least control; used as-is. Use case: when you need ready-made applications. Examples: Google Workspace, Office 365, Salesforce.
Each model offers different levels of flexibility, control, and maintenance depending on the requirements of the business or application.
How can we enable communication between 500 AWS accounts internally?
+
Use AWS Transit Gateway or VPC peering.
How do you configure a solution where a Lambda function triggers on an S3 upload and updates DynamoDB?
+
Use an S3 Event Notification → trigger Lambda → write to DynamoDB.
What is the standard port for RDP?
+
3389
How do you configure a Windows EC2 instance to join an Active Directory domain?
+
Configure AWS Directory Service and use AWS Systems Manager.
How can you copy files from a Linux server to an S3 bucket?
+
Using the AWS CLI:
```sh
aws s3 cp file.txt s3://my-bucket/
```
What permissions do you need to grant for that S3 bucket?
+
s3:PutObject for uploads.
What are the different types of VPC endpoints and when do you use them?
+
- Interface Endpoints: powered by AWS PrivateLink, used for most AWS services.
- Gateway Endpoints: used only for S3 and DynamoDB.
How do you resolve an ImagePullBackOff error when using an Alpine image pushed to ECR in a pipeline?
+
Check authentication: run aws ecr get-login-password to re-authenticate to the registry.
What is the maximum size of an S3 object?
+
5 TB.
What encryption options do we have in S3?
+
SSE-S3, SSE-KMS, SSE-C, and client-side encryption.
Can you explain IAM user, IAM role, and IAM group in AWS?
+
- IAM User: A user account with AWS permissions.
- IAM Role: A temporary permission set assumed by users/services.
- IAM Group: A collection of IAM users.
What is the difference between an IAM role and an IAM policy document?
+
- IAM Role: Assigns permissions dynamically.
- IAM Policy: Defines what actions are allowed.
What are inline policies and managed policies?
+
- Inline Policy: Directly attached to a single user/role.
- Managed Policy: A reusable policy that can be attached to multiple entities.
How can we add a load balancer to Route 53?
+
Create an ALB/NLB, then create an Alias Record in Route 53.
What are A records and CNAME records?
+
- A Record: Maps a domain to an IP address.
- CNAME Record: Maps a domain to another domain.
What is the use of a target group in a load balancer?
+
It routes traffic to backend instances.
If a target group is unhealthy, what might be the reasons?
+
Wrong health check settings, instance issues, or a security group blocking traffic.
AWS Networking Questions for DevOps
What is a VPC in AWS?
+
A VPC is a private, isolated network within AWS used to launch and manage resources securely.
How do Security Groups work in AWS?
+
Security Groups are virtual firewalls that control inbound and outbound traffic to instances in a VPC.
What is an Internet Gateway in AWS?
+
An Internet Gateway enables internet connectivity for resources in a VPC's public subnets.
What is a NAT Gateway?
+
A NAT Gateway allows private subnet instances to access the internet without exposing them to inbound traffic.
What is Route 53?
+
Route 53 is AWS's DNS service, used for routing and failover configurations to enhance application availability.
What is an Elastic Load Balancer (ELB)?
+
ELB distributes incoming traffic across instances, supporting scalability and fault tolerance.
What is AWS PrivateLink?
+
PrivateLink provides private connectivity between VPCs and AWS services, bypassing the public internet.
What is a Transit Gateway?
+
Transit Gateway connects VPCs and on-premises networks via a central hub, simplifying complex networks.
What are Subnets in AWS?
+
Subnets are segments within a VPC used to organize resources and control traffic flow.
What is AWS Direct Connect?
+
Direct Connect provides a dedicated, low-latency connection between AWS and on-premises data centers.
What is VPC Peering?
+
VPC Peering enables direct communication between two VPCs, often used to connect different environments.
What is an Egress-Only Internet Gateway?
+
It allows IPv6 traffic to exit a VPC while blocking unsolicited inbound traffic.
What is the difference between Security Groups and Network ACLs?
+
Security Groups are instance-level, stateful firewalls, while Network ACLs are subnet-level, stateless firewalls.
What is AWS Global Accelerator?
+
Global Accelerator directs traffic through AWS's global network, reducing latency and improving performance.
How do you monitor network traffic in AWS?
+
AWS tools like VPC Flow Logs and CloudWatch allow for traffic monitoring and logging within VPCs.
AZURE
What is Microsoft Azure, and what are its primary uses?
+
Answer: Microsoft Azure is a cloud computing platform and service created by Microsoft, offering a range of cloud services, including computing, analytics, storage, and networking. Users can pick and choose these services to develop and scale new applications or run existing ones in the public cloud. Primary uses include virtual machines, app services, storage services, and databases.
What are Azure Virtual Machines, and why are they used?
+
Answer: Azure Virtual Machines (VMs) are scalable, on-demand compute resources provided by Microsoft. They allow users to deploy and manage software within a controlled environment, similar to an on-premises server. Azure VMs are used for various purposes, like testing and developing applications, hosting websites, and creating cloud-based environments for data processing or analytics.
What is Azure Active Directory (Azure AD)?
+
Answer: Azure Active Directory is Microsoft's cloud-based identity and access management service. It helps organizations manage user identities and provides secure access to resources and applications. Azure AD offers features like single sign-on (SSO), multifactor authentication, and conditional access to protect against cybersecurity threats.
Explain Azure Functions and when they are used.
+
Answer: Azure Functions is a serverless compute service that enables users to run event-driven code without managing infrastructure. It is used for microservices, automation tasks, scheduled data processing, and other scenarios that benefit from running short, asynchronous, or stateless operations.
What is an Azure Resource Group?
+
Answer: An Azure Resource Group is a container that holds related resources for an Azure solution, allowing for easier organization, management, and deployment of assets. All resources within a group share the same lifecycle, permissions, and policies, making it simpler to control costs and streamline management.
What are Availability Sets in Azure?
+
Answer: Availability Sets are a feature in Azure that ensures VM reliability by distributing VMs across multiple fault and update domains. This configuration helps reduce downtime during hardware or software failures by ensuring that at least one instance remains accessible, which is especially useful for high-availability applications.
How does Azure handle scaling of applications?
+
Answer: Azure offers two types of scaling options:
- Vertical Scaling (Scaling Up): Increasing the resources, such as CPU or RAM, of an existing server.
- Horizontal Scaling (Scaling Out): Adding more instances to handle increased load.
Azure Autoscale automatically adjusts resources based on predefined rules or conditions, making it ideal for handling fluctuating workloads.
What is Azure DevOps, and what are its main features?
+
Answer: Azure DevOps is a suite of development tools provided by Microsoft for managing software development and deployment workflows. Key features include Azure Repos (version control), Azure Pipelines (CI/CD), Azure Boards (agile planning and tracking), Azure Artifacts (package management), and Azure Test Plans (automated testing).
What are Azure Logic Apps?
+
Answer: Azure Logic Apps is a cloud-based service that helps automate and orchestrate workflows, business processes, and tasks. It provides a visual designer to connect different services and applications without writing code. Logic Apps are often used for automating repetitive tasks, such as data integration, notifications, and content management.
What is Azure Kubernetes Service (AKS), and why is it important?
+
Answer: Azure Kubernetes Service (AKS) is a managed Kubernetes service that simplifies deploying, managing, and scaling containerized applications using Kubernetes on Azure. AKS is significant because it offers serverless Kubernetes, an integrated CI/CD experience, and enterprise-grade security, allowing teams to manage containerized applications more efficiently and reliably.
What is Azure Blob Storage, and what are the types of blobs?
+
Answer: Azure Blob Storage is a scalable object storage solution for unstructured data, such as text or binary data. It's commonly used for storing files, images, videos, backups, and logs. The three types of blobs are:
- Block Blob: Optimized for storing large amounts of text or binary data.
- Append Blob: Ideal for logging, as it's optimized for appending operations.
- Page Blob: Used for scenarios with frequent read/write operations, such as storing virtual hard disk (VHD) files.
What is Azure Cosmos DB, and what are its key features?
+
Answer: Azure Cosmos DB is a globally distributed, multi-model database service that provides low-latency, scalable storage for applications. Key features include automatic scaling, support for multiple data models (like document, key-value, graph, and column-family), and a global distribution model that replicates data across Azure regions for improved performance and availability.
How does Azure manage security for resources, and what is Azure Security Center?
+
Answer: Azure Security Center is a unified security management system that provides threat protection for resources in Azure and on-premises. It monitors security configurations, identifies vulnerabilities, applies security policies, and helps detect and respond to threats with advanced analytics. Azure also uses role-based access control (RBAC), network security groups (NSGs), and virtual network (VNet) isolation to enforce security at different levels.
What is an Azure Virtual Network (VNet), and how is it used?
+
Answer: Azure Virtual Network (VNet) is a networking service that allows users to create private networks in Azure. VNets enable secure communication between Azure resources and can be connected to on-premises networks using VPNs or ExpressRoute. They support subnetting, network security groups, and VNet peering to optimize network performance and security.
Can you explain Azure Traffic Manager and its routing methods?
+
Answer: Azure Traffic Manager is a DNS-based load balancer that directs incoming requests to different endpoints based on configured routing rules. It helps ensure high availability and responsiveness by routing traffic to the best-performing endpoint. The primary routing methods include:
- Priority: Routes traffic to the primary endpoint unless it's unavailable.
- Weighted: Distributes traffic based on assigned weights.
- Performance: Routes traffic to the endpoint with the best performance.
- Geographic: Routes users to endpoints based on their geographic location.
What is Azure Application Gateway, and how does it differ from Load Balancer?
+
Answer: Azure Application Gateway is a web traffic load balancer that includes application-layer (Layer 7) routing features, such as SSL termination, URL-based routing, and session affinity. It's ideal for managing HTTP/HTTPS traffic. In contrast, Azure Load Balancer operates at Layer 4 (Transport) and is designed for distributing network traffic based on IP protocols. Application Gateway is more suitable for managing web applications, while Load Balancer is used for general network-level load balancing.
What is Azure Policy, and why is it used
+
Answer:Azure Policy is aservice for enforcing organizational standards and assessing compliance atscale. It allows adminis trators to create and apply policies that controlresources in a specific way, such as restricting certain VM types orensuring specific tags are applied to resources. Azure Policy ensuresgovernance by enforcing rules across resources in a consis tentmanner.
How do Azure Availability Zones ensure high availability
+
Answer:Azure AvailabilityZones are physically separate locations within an Azure region, designed toprotect applications and data from data center failures. Each zone is equipped with independent power, cooling, and networking, allowing for thedeployment of resources across multiple zones. By dis tributing resourcesacross zones, Availability Zones provide high availability and resilienceagainst regional dis ruptions.
What is Azure Key Vault, and what does it manage
+
Answer:Azure Key Vault is a cloud service that securely stores and manages sensitive information, suchas secrets, encryption keys, and certificates. It helps enhance security bycentralizing the management of secrets and enabling policies for accesscontrol, logging, and auditing. Key Vault is essential for applicationsneeding a secure way to store sensitive information. Explain the difference between Azure CLI and Azure PowerShell. Answer:Both Azure CLI andAzure PowerShell are tools for managing Azure resources via commands. Azure CLI:Across-platform command-line tooloptimized for handling common Azuremanagement tasks. Commands are simpler, especially for thosefamiliar with Linux-style command line interfaces. Azure PowerShell:Amodule specifically for managing Azure resources in PowerShell, integrating wellwith Windows environments and offering detailed scripting and automation capabilities.
What is Azure Service Fabric
+
Answer: Azure Service Fabric is a distributed systems platform that simplifies the packaging, deployment, and management of scalable microservices. It's used for building high-availability, low-latency applications that can be scaled horizontally. Service Fabric manages complex problems like stateful persistence, workload balancing, and fault tolerance, making it suitable for mission-critical applications.
What is the purpose of Azure Monitor
+
Answer: Azure Monitor is a comprehensive monitoring solution that collects and analyzes data from Azure and on-premises environments. It provides insights into application performance, resource health, and potential issues. Azure Monitor includes features like Application Insights (for app performance monitoring) and Log Analytics (for querying and analyzing logs) to provide end-to-end visibility.
What is Azure Site Recovery, and how does it work
+
Answer: Azure Site Recovery is a disaster recovery service that replicates workloads running on VMs and physical servers to a secondary location. It automates failover and failback during outages to ensure business continuity. Site Recovery supports both Azure-to-Azure and on-premises-to-Azure replication, providing a cost-effective solution for disaster recovery planning.
What is Azure Container Instances (ACI), and how does it compare to AKS
+
Answer: Azure Container Instances (ACI) is a service that allows users to quickly deploy containers in a fully managed environment without managing virtual machines. Unlike Azure Kubernetes Service (AKS), which is a managed Kubernetes service for orchestrating complex container workloads, ACI is simpler and used for single-container deployments, such as lightweight or batch jobs.
Explain Azure Logic Apps vs. Azure Functions
+
Answer: Azure Logic Apps: a workflow-based service ideal for automating business processes and integrations, with a visual designer that allows drag-and-drop configuration. Azure Functions: a serverless compute service designed for event-driven execution of custom code. It's useful for tasks that require more complex logic but are limited to a single operation.
What is Azure Private Link, and why is it used
+
Answer: Azure Private Link enables private access to Azure services over a private endpoint within a virtual network (VNet). It ensures traffic between the VNet and Azure services doesn't travel over the public internet, enhancing security and reducing latency. Private Link is useful for securing access to services like Azure Storage, SQL Database, and your own PaaS services.
What is Azure ExpressRoute, and how does it differ from a VPN
+
Answer: Azure ExpressRoute is a private connection between an on-premises environment and Azure, bypassing the public internet for improved security, reliability, and speed. Unlike a VPN, which operates over the internet, ExpressRoute uses a dedicated circuit, making it ideal for workloads requiring high-speed connections and consistent performance.
What is Azure Bastion, and when should it be used
+
Answer: Azure Bastion is a managed service that allows secure RDP and SSH connectivity to Azure VMs through the Azure portal, without needing a public IP on the VM. It provides a more secure method of accessing VMs, as it uses a hardened service that mitigates exposure to attacks associated with public internet access.
What is Azure Event Grid, and how does it work
+
Answer: Azure Event Grid is an event routing service for managing events across different services. It uses a publish-subscribe model to route events from sources like Azure resources or custom sources to event handlers (subscribers) like Azure Functions or Logic Apps. Event Grid is useful for building event-driven applications that respond to changes in real time.
What are Azure Blueprints, and how do they benefit governance
+
Answer: Azure Blueprints enable organizations to define and manage a repeatable set of Azure resources that adhere to organizational standards and policies. Blueprints include templates, role assignments, policy assignments, and resource groups. They're beneficial for governance because they enforce compliance and consistency in resource deployment across environments.
Explain the difference between Azure Policy and Azure Role-Based Access Control (RBAC)
+
Answer: Azure Policy enforces specific rules and requirements on resources, like ensuring certain tags are applied or restricting resource types. It focuses on resource compliance. Azure RBAC manages user and role permissions for resources, controlling who has access and what actions they can perform. RBAC focuses on access management.
What is Azure Data Lake, and how is it used
+
Answer: Azure Data Lake is a storage solution optimized for big data analytics workloads. It provides high scalability, low-cost storage for large volumes of data, and can store structured, semi-structured, and unstructured data. Data Lake integrates with analytics tools like Azure HDInsight, Azure Databricks, and Azure Machine Learning for complex data processing and analysis.
What is Azure Synapse Analytics
+
Answer: Azure Synapse Analytics, formerly known as Azure SQL Data Warehouse, is an analytics service that brings together big data and data warehousing. It enables data ingestion, preparation, management, and analysis in one unified environment. Synapse integrates with Spark, SQL, and other analytics tools, making it ideal for complex data analytics and business intelligence solutions.
What is the purpose of Azure Sentinel
+
Answer: Azure Sentinel is a cloud-native Security Information and Event Management (SIEM) tool that provides intelligent security analytics across enterprise environments. It collects, detects, investigates, and responds to security threats using AI and machine learning, making it an essential tool for organizations focused on proactive threat detection and response.
What are Network Security Groups (NSGs) in Azure, and how do they work
+
Answer: Network Security Groups (NSGs) are firewall-like controls in Azure that filter network traffic to and from Azure resources. NSGs contain security rules that allow or deny inbound and outbound traffic based on IP addresses, port numbers, and protocols. They're typically used to secure VMs, subnets, and other resources within a virtual network.
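A minimal Azure CLI sketch (group, NSG, and rule names are illustrative) that creates an NSG and opens inbound HTTPS:
az network nsg create --resource-group demo-rg --name demo-nsg
az network nsg rule create \
  --resource-group demo-rg \
  --nsg-name demo-nsg \
  --name allow-https-inbound \
  --priority 100 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --destination-port-ranges 443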
What is Azure Disk Encryption
+
Answer: Azure Disk Encryption uses BitLocker (for Windows) and DM-Crypt (for Linux) to encrypt VMs' data and operating system disks. It integrates with Azure Key Vault to manage and control encryption keys, ensuring that data at rest on VM disks is secure and meets compliance requirements.
What is Azure Traffic Analytics, and how does it work
+
Answer: Azure Traffic Analytics is a network traffic monitoring solution built on Azure Network Watcher. It provides visibility into network activity by analyzing flow logs from Network Security Groups, giving insights into traffic patterns, network latency, and potential security threats. It's commonly used for diagnosing connectivity issues, optimizing performance, and monitoring security.
What is Azure Resource Manager (ARM), and why is it important
+
Answer: Azure Resource Manager (ARM) is the deployment and management service for Azure resources. It enables users to manage resources through templates (JSON-based), allowing infrastructure as code (a deployment sketch appears below). ARM organizes resources into resource groups and provides access control, tagging, and policy application at a centralized level, simplifying resource deployment and management.
Explain Azure Cost Management and its key features
+
Answer: Azure Cost Management is a tool that provides insights into cloud spending and usage across Azure and AWS resources. Key features include cost analysis, budgeting, alerts, cost-saving recommendations, and tracking spending trends over time. It helps organizations monitor, control, and optimize their cloud costs.
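Returning to ARM templates, a minimal deployment sketch, assuming a template file named azuredeploy.json exists in the working directory:
az deployment group create \
  --resource-group demo-rg \
  --template-file azuredeploy.json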
What is Azure Lighthouse, and how is it used
+
Answer: Azure Lighthouse is a management service that enables service providers or enterprises to manage multiple tenants from a single portal. It offers secure access to customer resources, policy enforcement, and role-based access across environments. Azure Lighthouse is particularly useful for managed service providers (MSPs) managing multiple client subscriptions.
What is the difference between Azure Table Storage and Azure SQL Database
+
Answer: Azure Table Storage is a NoSQL key-value storage service designed for structured data. It's best for storing large volumes of semi-structured data without complex querying. Azure SQL Database is a fully managed relational database service based on SQL Server. It's suitable for transactional applications requiring complex querying, relationships, and constraints.
What is Azure Multi-Factor Authentication (MFA), and why is it important
+
Answer: Azure Multi-Factor Authentication adds an additional layer of security by requiring a second verification step for user logins (such as an SMS code, phone call, or app notification). It reduces the risk of unauthorized access to accounts, especially for sensitive or privileged accounts.
What is Azure API Management, and how does it help in managing APIs
+
Answer: Azure API Management is a service that allows organizations to create, publish, secure, and monitor APIs. It provides a centralized hub to manage API versioning, access control, usage analytics, and developer portals, helping teams control access to APIs and enhance the developer experience.
Explain the concept of Azure Automation
+
Answer: Azure Automation is a service that automates tasks across Azure environments, like VM management, application updates, and configuration management. It uses runbooks (PowerShell scripts, Python, etc.) to automate repetitive tasks and supports workflows for handling complex processes. It helps save time and reduces errors in managing Azure resources.
What is Azure CDN, and when should it be used
+
Answer: Azure Content Delivery Network (CDN) is a global cache network designed to deliver content to users faster by caching files at edge locations close to users. It's commonly used to improve the performance of websites and applications, reducing latency when delivering static files, streaming media, and other content-heavy workloads.
What is Azure AD B2C, and how does it differ from Azure AD
+
Answer: Azure AD B2C (Business-to-Consumer) is a service specifically for authenticating and managing identities in customer-facing applications, allowing external users to sign in with social or local accounts. Unlike Azure AD, which is designed for corporate identity management and secure access to internal resources, Azure AD B2C is tailored for applications interacting with end customers.
What is Azure Data Factory, and what is it used for
+
Answer: Azure Data Factory (ADF) is a data integration service for creating, scheduling, and managing data workflows. It's used for extract, transform, and load (ETL) processes, enabling data movement and transformation across on-premises and cloud environments, and integrates with services like Azure SQL Database, Azure Blob Storage, and others.
What is Azure Machine Learning, and what are its key capabilities
+
Answer: Azure Machine Learning is a cloud-based service for building, training, deploying, and managing machine learning models. It supports automated ML, experiment tracking, model versioning, and scalable deployment options. It's valuable for data scientists and developers looking to integrate machine learning into applications without extensive infrastructure management.
What is a VNet (Virtual Network) in Azure
+
A VNet is a private network in Azure used to securely connect and manage resources.
What are Network Security Groups (NSGs) in Azure
+
NSGs filter inbound/outbound traffic to Azure resources, acting as virtual firewalls.
What is an Application Gateway in Azure
+
Application Gateway is a Layer 7 load balancer with WAF protection for application routing.
How does Azure Load Balancer work
+
Azure Load Balancer distributes traffic among VMs to enhance availability and reliability.
What is Azure Traffic Manager
+
Traffic Manager is a DNS-based service that routes traffic across Azure regions globally.
What is a VPN Gateway in Azure
+
A VPN Gateway enables secure, encrypted connections between Azure VNets and on-premises networks.
What is Azure ExpressRoute
+
ExpressRoute provides a private, high-bandwidth connection between Azure and on-premises data centers.
What is a Peering Connection in Azure
+
VNet Peering connects two VNets within or across Azure regions for direct communication.
What is Azure Bastion
+
Azure Bastion provides secure RDP and SSH access to VMs without a public IP address.
What is an Application Security Group (ASG)
+
ASGs allow grouping of VMs for simplified network security management within VNets.
What is an Azure Private Link
+
Private Link provides private connectivity to Azure services over a VNet, bypassing the public internet.
What are Subnets in Azure
+
Subnets segment a VNet to organize resources and control network access and routing.
What is an Azure Public IP Address
+
A public IP allows Azure resources to communicate with the internet.
What is a Route Table in Azure
+
Route tables define custom routing rules to control traffic flow within VNets.
What is Azure DNS
+
Azure DNS is a domain management service providing high availability and fast DNS resolution.
What is Azure Front Door
+
Azure Front Door is a global load balancer and CDN for secure, fast, and reliable access.
What is a Service Endpoint in Azure
+
Service Endpoints provide private access to Azure services from within a VNet.
What is a DDoS Protection Plan in Azure
+
Azure DDoS Protection safeguards against distributed denial-of-service attacks.
What is Azure Monitor Network Insights
+
Network Insights provides a unified view of network health and helps with troubleshooting.
What is a Network Virtual Appliance (NVA) in Azure
+
An NVA is a VM that provides advanced networking functions, like firewalls, within Azure.
Monitoring and Logging (Prometheus & Grafana, ELK Stack, Splunk)
Prometheus & Grafana
What is Prometheus
+
Prometheus is an open-source monitoring and alerting toolkit designed for reliability and scalability. It collects and stores time-series data using a pull model over HTTP and provides a flexible query language called PromQL for analysis.
What are the main components of Prometheus
+
Prometheus Server: collects and stores time-series metrics. Exporters: expose metrics from applications or systems. Pushgateway: supports short-lived jobs that push metrics. Alertmanager: handles alert notifications. PromQL: query language for analyzing metrics.
How does Prometheus collect metrics
+
Prometheus uses a pull model to scrape metrics from configured targets at specified intervals via HTTP endpoints (/metrics).
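For reference, a minimal scrape configuration might look like the following; the job name and target are illustrative, and promtool (bundled with Prometheus) can validate the file before a reload:
cat > prometheus.yml <<'EOF'
global:
  scrape_interval: 15s        # how often targets are scraped
scrape_configs:
  - job_name: 'node'          # hypothetical job scraping node_exporter
    static_configs:
      - targets: ['localhost:9100']
EOF
promtool check config prometheus.yml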
What is PromQL, and how is it used
+
PromQL (Prometheus Query Language) is used to query and aggregate time-series data. Example queries: CPU usage: rate(node_cpu_seconds_total[5m]); memory usage: node_memory_Active_bytes / node_memory_MemTotal_bytes
What is the difference between a counter, gauge, and histogram in Prometheus
+
Counter: increases over time, never decreases (e.g., number of requests). Gauge: can go up or down (e.g., memory usage, temperature). Histogram: measures distributions (e.g., request duration).
How does Prometheus handle high availability
+
Prometheus doesn't support clustering, but redundancy can be achieved by running multiple Prometheus servers scraping the same targets and using Thanos or Cortex for long-term storage.
How does Prometheus alerting work
+
Alerts are defined in alerting rules, which Prometheus evaluates. If conditions match, alerts are sent to Alertmanager, which routes them to notification channels like Slack, email, PagerDuty, or webhooks.
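A minimal alerting-rule sketch built on the standard up metric; the alert name, duration, and labels are illustrative, and promtool can validate the file:
cat > alert-rules.yml <<'EOF'
groups:
  - name: example-alerts
    rules:
      - alert: InstanceDown
        expr: up == 0          # target failed its last scrapes
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "{{ $labels.instance }} has been unreachable for 5 minutes"
EOF
promtool check rules alert-rules.yml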
How can you scale Prometheus
+
Use federation to scrape data from multiple Prometheus instances. Use Thanos or Cortex for long-term storage and HA. Shard metrics by using different Prometheus instances for different workloads.
What is the role of an Exporter in Prometheus
+
Exporters expose metrics from services that don't natively support Prometheus. Examples: node_exporter (system metrics like CPU, RAM), cadvisor (container metrics), blackbox_exporter (HTTP/TCP probes).
How do you integrate Prometheus with Kubernetes
+
Use kube-prometheus-stack (Helm chart) to deploy Prometheus, Grafana, and Alertmanager. Service discovery fetches metrics from pods, nodes, and services. Use custom ServiceMonitors and PodMonitors with the Prometheus Operator.
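A typical installation sketch using the community Helm chart mentioned above; the release and namespace names are arbitrary:
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install monitoring prometheus-community/kube-prometheus-stack \
  --namespace monitoring --create-namespace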
What is Grafana, and how does it work
+
Grafana is an open-source analytics and visualization tool that allows querying, alerting, and dashboarding of metrics from multiple sources like Prometheus, InfluxDB, Elasticsearch, and more.
What are the key features of Grafana
+
Multi-data-source support (Prometheus, Loki, InfluxDB, MySQL, etc.); interactive and customizable dashboards; role-based access control; alerting and notifications; plugins for additional functionality.
How does Grafana connect to Prometheus
+
In Grafana, go to Configuration → Data Sources → Add Data Source. Select Prometheus, enter the Prometheus URL, and save the configuration.
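The same data source can also be added through Grafana's HTTP API, as a sketch; the URL and default admin credentials below are placeholders for a local test setup:
curl -s -u admin:admin -H 'Content-Type: application/json' \
  -X POST http://localhost:3000/api/datasources \
  -d '{"name":"Prometheus","type":"prometheus","url":"http://localhost:9090","access":"proxy"}'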
How can you create an alert in Grafana
+
In a panel, click Edit → Alert → Create Alert Rule. Set conditions like thresholds and evaluation intervals. Configure notification channels (Slack, email, webhook, PagerDuty).
What are Annotations in Grafana
+
Annotations are markers added to dashboards to highlight specific events in time, often used for tracking deployments, incidents, or anomalies.
What is Loki in Grafana, and how does it work
+
Loki is a log aggregation system designed by Grafana Labs for indexing and querying logs efficiently. It works well with Prometheus and Grafana.
How does Grafana handle authentication and authorization
+
Supports LDAP, OAuth, SAML, and API keys. Role-based access control (Viewer, Editor, Admin).
What is the difference between Panels and Dashboards in Grafana
+
Panels: individual visualizations (graphs, tables, heatmaps). Dashboards: collections of panels grouped together.
What is the best way to store Grafana dashboards
+
Use JSON exports for saving dashboards. Store them in Git repositories for version control. Automate deployment using the Grafana Terraform provider.
How can you secure Grafana
+
Enable authentication (OAuth, LDAP, SAML). Set up role-based access control (RBAC). Restrict data sources with org-level access. Use HTTPS with TLS certificates.
General Q&A
How do you monitor the health of a system in production
+
Ans: Key metrics: monitor resource usage (CPU, memory, disk), response times, error rates, throughput, and custom application metrics. Uptime checks: use health checks (e.g., HTTP status codes) to ensure the service is operational. Logs: continuously collect and review logs for warnings, errors, or unusual behavior. Alerts: set up alerts based on thresholds to get notified about any issues in real time. Dashboards: use dashboards to visualize the overall health of the system in real time.
What tools have you used for monitoring (e.g., Prometheus, Grafana)
+
Ans: Prometheus: for time-series metrics collection; it scrapes metrics from targets and provides flexible querying using PromQL. Grafana: for visualizing Prometheus metrics through rich dashboards; I often use it to display CPU, memory, network utilization, error rates, and custom application metrics. Alertmanager (with Prometheus): to configure alerts based on Prometheus metrics. ELK Stack (Elasticsearch, Logstash, Kibana): for log aggregation, analysis, and visualization. Prometheus Operator (for Kubernetes): to monitor Kubernetes clusters.
How do you set up alerts for monitoring systems
+
Ans: Prometheus + Alertmanager: configure alerts in Prometheus based on thresholds (e.g., CPU usage > 80%) and route those alerts through Alertmanager to different channels (e.g., Slack, email). Threshold-based alerts: for example, alerts for high response times, high error rates, or resource exhaustion (like disk space). Custom alerts: set up based on application-specific metrics, such as failed transactions or processing queue length. Kubernetes health checks: use readiness and liveness probes for microservices to alert when services are not ready or are down. Grafana: also provides alerting features for any visualized metrics.
Scenario-Based Questions
If you see gaps in Grafana graphs with Prometheus data, what could be the issue
+
Possible reasons: the Prometheus scrape interval is too high; data retention is too short; or an instance is down or unreachable.
How do you optimize Prometheus storage
+
Increase scrape intervals (collect less frequently) where possible. Use remote storage solutions (Thanos, Cortex). Set retention policies to drop old data.
What happens if Prometheus goes down, and how do you ensure high availability
+
Since Prometheus has no built-in HA, use Thanos for clustering, or run redundant Prometheus instances scraping the same targets.
How do you monitor a microservices architecture with Prometheus and Grafana
+
Use the Prometheus Operator for Kubernetes monitoring. Implement service-specific metrics using Prometheus client libraries. Set up Grafana dashboards with relevant service metrics.
If Prometheus metrics are missing from Grafana, how do you troubleshoot
+
Check if the Prometheus server is running. Verify that the data source is configured correctly in Grafana. Run PromQL queries in the Prometheus UI to check for missing metrics. Ensure correct labels and scrape intervals.
ELK Stack
Can you explain the ELK stack and how you've used it
+
Ans: Elasticsearch: a search engine that stores, searches, and analyzes large volumes of log data. Logstash: a log pipeline tool that collects logs from different sources, processes them (e.g., parsing, filtering), and ships them to Elasticsearch. Kibana: a web interface for visualizing data stored in Elasticsearch; it's useful for creating dashboards to analyze logs, searching logs with queries, and creating visualizations like graphs and pie charts. Usage example: the ELK stack aggregates logs from multiple microservices. Logs are forwarded from the services to Logstash, where they are filtered and formatted, then sent to Elasticsearch for indexing. Kibana is used to visualize logs and create dashboards that monitor error rates, request latencies, and service health.
How do you troubleshoot an application using logs
+
Ans: Centralized logging: collect all application and system logs in a single place (using the ELK stack or similar solutions). Search for errors: start by searching for any error or exception logs during the timeframe when the issue occurred. Trace through logs: follow the logs to trace requests through various services in distributed systems, especially by correlating request IDs or user IDs. Examine context: check logs leading up to the error to understand the context, such as resource constraints or failed dependencies. Filter by severity: use log levels (INFO, DEBUG, ERROR) to focus on relevant logs for the issue. Log formats: ensure consistent logging formats (JSON, structured logs) to make parsing and searching easier.
Splunk
What is Splunk
+
Splunk is a software tool used to search, monitor, and analyze large amounts of machine-generated data through a web interface. It collects data from different sources and helps you analyze it in real time. Key components of Splunk: Splunk Indexer: stores and processes data. Splunk Search Head: lets you search and visualize the data. Splunk Forwarder: sends data to the indexer. Splunk Deployment Server: manages settings for Splunk environments.
What is a Splunk Forwarder
+
A Splunk Forwarder is a lightweight tool that collects logs from systems and sends them to the Splunk Indexer for processing. Types of Splunk Forwarders: Universal Forwarder (UF): a basic agent that sends raw log data. Heavy Forwarder (HF): a heavier agent that can process data before sending it.
What is a Splunk Index
+
A Splunk index is where data is stored in Splunk. It organizes data in time-based "buckets" for quick searches.
How does Splunk handle large volumes of data
+
Splunk uses a time-series indexing system and can distribute data across multiple indexers for better performance and scalability. Splunk Free vs. Splunk Enterprise: Splunk Free is a limited version with no clustering or advanced features; Splunk Enterprise is the full version with enterprise-level features like clustering and distributed search.
What is a Splunk Search Head
+
The Search Head allows users to search, view, and analyze the data stored in Splunk.
What are Splunk Apps
+
Splunk Apps are pre-configured packages that extend Splunk’s capabilities forspecific tasks, such as security monitoring or infrastructure management.
What is SPL (Search Processing Language)
+
SPL is the language used to search, filter, and analyze data in Splunk. It helps users perform complex queries and create visualizations.
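As a small, hedged example (the index name, field, and credentials are hypothetical), an SPL query can be run in the Search Head UI or from the Splunk CLI:
# Count 5xx responses by host over the last hour
splunk search 'index=web_logs status>=500 earliest=-1h | stats count by host' -auth admin:changeme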
How to Secure Data in Splunk
+
You can secure data in Splunk with role-based access, encryption for data in transit and at rest, and authentication methods like LDAP. Splunk licensing model: Splunk uses a consumption-based license, where pricing depends on the amount of data ingested daily. Different license tiers are available, such as Free, Enterprise, and Cloud.
Networking
Explain the OSI model layers and their significance
+
The OSI model has seven layers, each handling a different part of networking: Physical layer (cables, Wi-Fi); Data Link layer (MAC addresses, switches); Network layer (IP addresses, routing); Transport layer (TCP, UDP); Session layer (maintains connections); Presentation layer (data conversion, encryption); Application layer (HTTP, DNS, FTP).
What is the OSI Model
+
The OSI Model is a 7-layer framework for understanding network interactions from the physical to the application layer. Physical: transmits raw data over hardware. Data Link: handles error detection and data framing. Network: routes data between networks using IP addresses. Transport: ensures reliable end-to-end communication. Session: manages sessions between applications. Presentation: translates data formats and handles encryption/compression. Application: provides network services to end-user applications.
What is TCP/IP
+
TCP/IP is a 4-layer communication protocol suite used for reliable data transmission across networks.
What is DNS, and why is it important
+
DNS (Domain Name System) resolves domain names to IP addresses and is essential for internet navigation.
What is a firewall
+
A firewall controls network traffic based on security rules, protecting against unauthorized access.
What is NAT (Network Address Translation)
+
NAT translates private IP addresses to a public IP, enabling internet access for devices in private networks.
Explain the difference between TCP and UDP
+
TCP is connection-oriented and reliable, while UDP is connectionless and faster but less reliable.
What is a VPN, and why is it used in DevOps
+
A VPN (Virtual Private Network) creates secure connections over the internet, often used for remote server access.
What is Load Balancing
+
Load balancing distributes network or application traffic across multiple servers for optimal performance.
What is a Proxy Server
+
A proxy server acts as an intermediary between a client and the internet, enhancing security and performance.
What is a Subnet Mask
+
A subnet mask defines the network and host portions of an IP address, segmenting large networks.
What is Round-Robin DNS and how does it benefit DevOps
+
Round-robin DNS provides a load-balancing mechanism that helps distribute traffic across multiple servers, enhancing resilience and scalability.
How do Firewall Rules apply to DevOps
+
Firewall rules restrict or allow traffic to and from applications. DevOps teams use them to secure CI/CD environments and limit unnecessary exposure, particularly in production.
What is a Packet Sniffer and its role in DevOps
+
A packet sniffer (e.g., Wireshark, tcpdump) monitors network traffic, useful for troubleshooting network issues, monitoring microservices communication, or debugging pipeline-related problems.
How does IPsec VPN assist DevOps
+
IPsec VPNs create secure connections, enabling remote DevOps engineers to securely access private infrastructure or cloud environments.
What is the difference between Routing and Switching in DevOps
+
Routing manages traffic between networks, important for multi-cloud or hybrid environments. Switching handles intra-data-center communication, ensuring efficient networking within private networks.
Why is Network Topology important in DevOps
+
Understanding network topology helps DevOps teams design resilient, scalable infrastructure and manage traffic flow effectively within clusters.
How does the TCP 3-Way Handshake apply to DevOps
+
The TCP 3-way handshake is crucial for troubleshooting connection issues, ensuring services and APIs are reliable and reachable in production.
What are CIDR Blocks and how do they assist in DevOps
+
CIDR blocks are used for network segmentation in cloud setups, improving IP address usage efficiency and security by separating environments like dev, test, and production.
How is Quality of Service (QoS) utilized in DevOps
+
QoS prioritizes network traffic, which helps in managing resource-intensive services and ensuring critical applications have sufficient bandwidth.
What role do Network Switches play in DevOps
+
Switches manage local traffic within private networks or data centers, essential for managing on-premises services in DevOps workflows.
How are Broadcast Domains relevant to DevOps
+
DevOps engineers must consider broadcast domains when designing network architecture to minimize unnecessary traffic and optimize application performance.
What is Tunneling and how is it used in DevOps
+
Tunneling (e.g., SSH, VPN) enables secure connections between DevOps environments, allowing remote management of cloud resources or linking different networks.
How is EIGRP used in DevOps
+
EIGRP is a routing protocol often used in legacy environments, helping DevOps teams manage internal routing within private networks.
What is the role of DNS A and CNAME Records in DevOps
+
A and CNAME records manage domain names for applications, helping direct traffic to the correct IP addresses or services.
How do Latency and Throughput impact DevOps
+
DevOps teams monitor latency and throughput to assess application performance, especially in distributed systems, where network speed significantly impacts user experience.
Why is DNS Propagation important for DevOps
+
DevOps teams need to understand DNS propagation to ensure smooth transitions when updating DNS records and avoid service disruptions.
How does ARP Poisoning affect DevOps
+
ARP poisoning is a network security risk that DevOps teams must defend against, implementing security measures to protect networks from such attacks.
What is a Route Table and how is it used in DevOps
+
Route tables control traffic flow between subnets in cloud environments, essential for managing access to private resources and ensuring efficient network communication.
How does Mesh Topology benefit DevOps
+
Mesh topologies offer redundancy and failover capabilities, crucial for maintaining service availability in container or Kubernetes networks.
How does DNS Failover support DevOps
+
DNS failover ensures high availability by automatically redirecting traffic to backup servers, minimizing downtime if primary servers become unavailable.
What is an Access Control List (ACL) in DevOps
+
ACLs restrict access to sensitive resources, commonly used in infrastructure-as-code (IaC) configurations to ensure secure access management.
What is a Point-to-Point Connection in DevOps
+
Point-to-point connections link private networks in hybrid environments, often between on-prem infrastructure and cloud environments, to ensure secure data transfer.
How does Split-Horizon work in DevOps
+
Split-horizon DNS helps prevent routing loops in complex cloud networks by managing how DNS records are resolved for internal versus external queries.
What is Packet Filtering in DevOps
+
Packet filtering, done by firewalls or cloud security services, enforces security rules and protects applications from unauthorized access.
How do VPN Tunnels aid DevOps
+
VPN tunnels secure connections between on-prem and cloud environments, essential for maintaining privacy and security in hybrid cloud setups.
How are DNS MX Records used in DevOps
+
MX records are vital for email routing, ensuring DevOps teams properly configure email services for applications and internal communication.
What is Routing Convergence and its importance in DevOps
+
Routing convergence refers to routers synchronizing their routing tables after a change. In DevOps, this ensures minimal downtime and effective failover management in cloud environments.
What is a DHCP Scope and how does it help DevOps
+
A DHCP scope automates IP address assignment in private cloud or on-prem environments, simplifying network management and resource allocation.
How do Symmetric and Asymmetric Encryption support DevOps
+
These encryption methods are crucial for securing data in transit and at rest. Symmetric encryption is faster, while asymmetric encryption ensures secure key exchange; both are vital in SSH, SSL/TLS, and VPNs.
How does Network Latency affect DevOps
+
Low latency is essential for real-time applications, and monitoring tools help DevOps teams identify and troubleshoot latency issues in pipelines.
What is the role of a Hub in DevOps
+
Hubs are simple networking devices still used in small test environments or office networks, providing basic connectivity but lacking the efficiency of switches.
How does Open Shortest Path First (OSPF) contribute to DevOps
+
OSPF enables dynamic routing in private networks, ensuring fault tolerance and efficient communication, important for DevOps teams managing network resilience.
How does a DMZ (Demilitarized Zone) apply in DevOps
+
A DMZ isolates public-facing services, providing a security buffer between the internet and internal networks, often used in production environments for additional protection.
What is a Service Level Agreement (SLA) in DevOps
+
SLAs define uptime and performance expectations. DevOps teams monitor these metrics to ensure that applications meet agreed-upon service levels.
What are Sticky Sessions and how are they used in DevOps
+
Sticky sessions, used in load balancers, ensure that user sessions are maintained across multiple interactions, essential for stateful applications in distributed environments.
How does a Subnet Mask work in DevOps
+
Subnetting helps DevOps teams segment networks to isolate environments (e.g., dev, test, prod), optimizing traffic flow and security.
How is Multicast used in DevOps
+
Multicast efficiently distributes data to multiple receivers, which is beneficial in environments like Kubernetes clusters where real-time updates are required across nodes.
What is Port Mirroring and how does it help DevOps
+
Port mirroring monitors network traffic for troubleshooting, used in DevOps for performance monitoring and analyzing microservices communications.
How does Zero Trust Architecture relate to DevOps
+
Zero Trust ensures that no one inside or outside the network is trusted by default. This security model is implemented in DevOps to enhance data security and limit the impact of a breach.
What is Subnetting
+
Subnetting is the process of dividing a larger network into smaller, more manageable sub-networks, or subnets. It allows for better IP address management, improved network performance, and enhanced security by isolating network segments.
Why is Subnetting important in DevOps
+
Subnetting helps DevOps teams segment networks to isolate different environments (e.g., development, testing, production) and manage IP address allocation efficiently. It also enables control over network traffic and improves security by minimizing broadcast traffic.
What is a Subnet Mask
+
A subnet mask is a 32-bit number that divides an IP address into the network and host portions. It helps identify which part of the IP address refers to the network and which part refers to the individual device. A typical subnet mask looks like 255.255.255.0.
What is CIDR (Classless Inter-Domain Routing)
+
CIDR is a method used to allocate IP addresses and route IP packets more efficiently. It replaces traditional class-based IP addressing (Class A, B, C) with a flexible and scalable system. CIDR notation combines the IP address with the subnet mask in the format IP_address/Prefix_Length, such as 192.168.1.0/24.
What is the difference between Public and Private IP Subnets
+
Public IP subnets are assigned to devices that need to be accessed from the internet (e.g., web servers). Private IP subnets are used for internal devices that do not need direct access from the internet, typically within a private network.
How do you calculate the number of subnets and hosts in a given subnet
+
To calculate the number of subnets and hosts: Number of subnets: 2^n (where n is the number of bits borrowed from the host portion). Number of hosts per subnet: (2^h) - 2 (where h is the number of host bits; subtracting 2 accounts for the network address and broadcast address). Example: given the network 192.168.1.0/24, if we borrow 2 bits for subnetting, the new subnet mask will be 255.255.255.192 (/26). Subnets: 2^2 = 4 subnets. Hosts per subnet: (2^6) - 2 = 62 hosts. The shell sketch below checks this arithmetic.
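The arithmetic can be sanity-checked with plain shell arithmetic; the numbers mirror the /24 → /26 example above:
# Borrow 2 bits from the 8 host bits of a /24, leaving 6 host bits
borrowed=2
host_bits=$((8 - borrowed))
echo "subnets:          $((2 ** borrowed))"         # 4
echo "hosts per subnet: $((2 ** host_bits - 2))"    # 62 (network + broadcast excluded)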
What is the difference between Subnet Mask 255.255.255.0 and 255.255.255.128
+
255.255.255.0 (/24) allows for 256 addresses (254 hosts) and is typically used for smaller networks. 255.255.255.128 (/25) creates two subnets from the original /24, with each subnet having 128 addresses (126 hosts).
How do you subnet a network with the IP 192.168.1.0/24 into 4 equal subnets
+
To divide 192.168.1.0/24 into 4 equal subnets, we need to borrow 2 bits from the host portion. New subnet mask: 255.255.255.192 (/26). Subnets: 192.168.1.0/26, 192.168.1.64/26, 192.168.1.128/26, 192.168.1.192/26.
What are the valid IP address ranges for a subnet with a 192.168.0.0/28 network
+
Network address: 192.168.0.0. First usable IP address: 192.168.0.1. Last usable IP address: 192.168.0.14. Broadcast address: 192.168.0.15. A /28 subnet allows for 16 IP addresses (14 usable).
What is VLSM (Variable Length Subnet Mask) and when is it used in DevOps
+
VLSM allows the use of different subnet masks within the same network, optimizing the allocation of IP addresses based on the needs of each subnet. In DevOps, VLSM helps allocate IPs efficiently, particularly in complex network setups like hybrid cloud architectures or large-scale containerized environments.
What is the difference between a /24 and /30 subnet
+
/24 (255.255.255.0) provides 256 IP addresses (254 usable hosts). /30 (255.255.255.252) provides only 4 IP addresses (2 usable hosts), commonly used for point-to-point links.
How do you handle subnetting in a Kubernetes environment
+
In Kubernetes, you may need to define subnets for various components like nodes, pods, and services. Using CIDR blocks, you allocate IP ranges for pods and services while ensuring that network traffic can flow efficiently between these components. Subnetting is essential for scaling Kubernetes clusters and isolating environments within the same network.
What are Supernets, and how are they different from Subnets
+
A supernet is a network that encompasses multiple smaller subnets. It's created by combining several smaller networks into one larger network by reducing the subnet mask size. Supernetting is useful for reducing the number of routing entries in large networks.
What is a Subnetting Table, and how is it useful in DevOps
+
A subnetting table shows different subnet sizes, possible subnets, and the number of hosts available in each subnet. DevOps teams can use this table for planning network architectures, assigning IP addresses, and managing resources efficiently across different environments.
How does CIDR notation improve IP address management in DevOps
+
CIDR notation allows for more flexible and efficient use of IP addresses than traditional class-based subnetting. It helps DevOps teams allocate IP address ranges that fit specific needs, whether for small environments or large cloud infrastructures, reducing wastage of IP addresses and improving scalability.
Security & Code Quality (OWASP, SonarQube, Trivy)
OWASP Dependency-Check
How do you integrate security into the DevOps lifecycle (DevSecOps)
+
Ans: Plan: during the planning phase, security requirements and potential risks are identified. Threat modeling and security design reviews are conducted to ensure the architecture accounts for security. Code: developers follow secure coding practices. Implementing code analysis tools helps in detecting vulnerabilities early; code reviews with a focus on security can also prevent vulnerabilities. Build: automated security tests, such as static analysis, are integrated into the CI/CD pipeline, ensuring that code vulnerabilities are caught before the build is deployed. Test: vulnerability scanning tools are integrated into testing to identify potential issues in the application and infrastructure. Deploy: at deployment, configuration management tools ensure that systems are deployed securely. Tools like Infrastructure as Code (IaC) scanners check for misconfigurations or vulnerabilities in the deployment process. Operate: continuous monitoring and logging tools like Prometheus and Grafana, along with security monitoring tools, help detect anomalies, ensuring systems stay secure in operation. Monitor: automated incident detection and response processes are essential, where alerts can be triggered for unusual activities.
What tools have you used to scan for vulnerabilities (e.g., OWASP Dependency-Check)
+
Ans: OWASP Dependency-Check: this tool is used to scan project dependencies for publicly disclosed vulnerabilities. It checks whether the third-party libraries you're using have known vulnerabilities in the National Vulnerability Database (NVD). Integration: in Jenkins, this can be integrated into the pipeline as a stage that generates a report on detected vulnerabilities. Example: in your Maven project, you've used owasp-dp-check for scanning dependencies. SonarQube: used to perform static code analysis. It detects code smells, vulnerabilities, and bugs in code by applying security rules during the build. SonarQube can be integrated with Jenkins and GitHub to ensure that every commit is scanned before merging. Trivy: a comprehensive security tool that scans container images, filesystems, and Git repositories for vulnerabilities. It helps ensure that Docker images are free of known vulnerabilities before deployment. Aqua Security / Clair: these tools scan container images for vulnerabilities, ensuring that images used in production don't contain insecure or outdated libraries. Snyk: a developer-friendly tool that scans for vulnerabilities in open-source libraries and Docker images. It integrates into CI/CD pipelines, allowing developers to remediate vulnerabilities early. Checkmarx: used for static application security testing (SAST). It scans the source code for vulnerabilities and security weaknesses that could be exploited by attackers. Terraform's checkov or terrascan: security-focused tools for scanning Infrastructure as Code (IaC) files for misconfigurations and vulnerabilities. By integrating these tools in the CI/CD pipeline, every stage from code development to deployment is secured, promoting a "shift-left" approach where vulnerabilities are addressed early in the lifecycle.
SonarQube
What is SonarQube, and why is it used
+
Answer: SonarQube is an open-source platform used to continuously inspect the code quality of projects by detecting bugs, vulnerabilities, and code smells. It supports multiple programming languages and integrates well with CI/CD pipelines, enabling teams to improve code quality through static analysis. It provides reports on code duplication, test coverage, security hotspots, and code maintainability.
What are the key features of SonarQube
+
Answer: Code quality management: tracks bugs, vulnerabilities, and code smells. Security hotspot detection: detects security risks such as SQL injection, cross-site scripting, etc. Technical debt management: helps calculate the time required to fix the detected issues. CI/CD integration: integrates with Jenkins, GitHub Actions, GitLab CI, and others. Custom quality profiles: allows defining coding rules according to the project's specific needs. Multi-language support: supports over 25 programming languages.
How does SonarQube work in a CI/CD pipeline
+
Answer: SonarQube can be integrated into CI/CD pipelines to ensure continuous code quality checks. In Jenkins, for example: the SonarQube Scanner is installed as a Jenkins plugin; in the Jenkins pipeline, the source code is analyzed by SonarQube during the build phase; the scanner sends the results back to SonarQube, which generates a report showing code issues; and the pipeline can fail if the quality gate defined in SonarQube is not met.
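For reference, a bare-bones scanner invocation from a pipeline shell step might look like this; the project key, source directory, server URL, and token are placeholders, and newer SonarQube versions accept sonar.token in place of sonar.login:
sonar-scanner \
  -Dsonar.projectKey=my-app \
  -Dsonar.sources=src \
  -Dsonar.host.url=http://localhost:9000 \
  -Dsonar.login="<authentication-token>"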
What are SonarQube Quality Gates
+
Answer: A Quality Gate is a set of conditions that must be met for a project to be considered good in terms of code quality. It's based on metrics such as bugs, vulnerabilities, code coverage, code duplication, etc. The pipeline can be configured to fail if the project does not meet the defined quality gate conditions, preventing poor-quality code from being released.
What is a 'code smell' in SonarQube
+
Answer: A code smell is a maintainability issue in the code that may not necessarily result in bugs or security vulnerabilities but makes the code harder to read, maintain, or extend. Examples include long methods, too many parameters in a function, or poor variable naming conventions.
What is the difference between bugs, vulnerabilities, and code smells in SonarQube
+
Answer: Bugs: issues in the code that are likely to cause incorrect or unexpected behavior during execution. Vulnerabilities: security risks that can make your application susceptible to attacks (e.g., SQL injection, cross-site scripting). Code smells: maintainability issues that don't necessarily lead to immediate errors but make the code more difficult to work with in the long term (e.g., poor variable names, large methods).
How do you configure SonarQube in Jenkins
+
Answer: Install the SonarQube Scanner plugin in Jenkins. Configure the SonarQube server details in Jenkins under "Manage Jenkins" → "Configure System". In your Jenkins pipeline or freestyle job, add the SonarQube analysis stage using the sonar-scanner command or the SonarQube plugin. Ensure that SonarQube analysis is triggered as part of the build, and configure Quality Gates to stop the pipeline if necessary.
What are SonarQube issues, and how are they categorized
+
Answer: SonarQube issues are problems found in the code, categorized by severity (Blocker, Critical, Major, Minor, Info). For example: Blocker: issues that can cause the program to fail (e.g., bugs, security vulnerabilities). Critical: significant problems that could lead to unexpected behavior. Minor: less severe issues, often related to coding style or best practices.
How does SonarQube help manage technical debt
+
Answer: SonarQube calculates technical debt as the estimated time required to fix all code quality issues (bugs, vulnerabilities, code smells). This helps teams prioritize what should be refactored, fixed, or improved, and balance this with feature development.
How does SonarQube handle multiple branches in a project
+
Answer: SonarQube has a branch analysis feature that allows you to analyze different branches of your project and track the evolution of code quality in each branch. This is helpful in DevOps pipelines to ensure that new feature branches or hotfixes meet the same code quality standards as the main branch.
What is SonarLint, and how does it relate to SonarQube
+
Answer: SonarLint is a plugin that integrates with IDEs (like IntelliJ IDEA, Eclipse, VS Code) to provide real-time code analysis. It helps developers find and fix issues in their code before committing them. SonarLint complements SonarQube by giving developers instant feedback in their local development environments.
What are some best practices when using SonarQube in a CI/CD pipeline
+
Answer: Automate the quality gate checks: set up pipelines to fail if the quality gate is not met. Ensure code coverage: aim for a high percentage of test coverage to detect untested and potentially buggy code. Regular analysis: analyze your project code frequently, preferably on every commit or pull request. Use quality profiles: customize quality profiles to match your team's coding standards. Fix critical issues first: prioritize fixing bugs and vulnerabilities over code smells.
What is the SonarQube Scanner, and how is it used
+
Answer: The SonarQube Scanner is a tool that analyzes the source code and sends the results to the SonarQube server for further processing. It can be run as part of a CI/CD pipeline or manually from the command line. The basic command is sonar-scanner, and the necessary project and server details go in the configuration file (sonar-project.properties).
Trivy
What is Trivy
+
Answer: Trivy is an open-source vulnerability scanner for containers and other artifacts. It is designed to identify vulnerabilities in OS packages and application dependencies in Docker images, filesystems, and Git repositories. Trivy scans images for known vulnerabilities using a database that is continuously updated with the latest CVEs (Common Vulnerabilities and Exposures).
How does Trivy work
+
Answer: Trivy works by performing the following steps: 1. Image analysis: it analyzes the container image to identify its OS packages and language dependencies. 2. Vulnerability database check: Trivy checks the identified packages against its vulnerability database, which is updated regularly with CVEs. 3. Reporting: it generates a report that details the vulnerabilities found, including severity levels, descriptions, and recommendations for remediation.
How can you install Trivy
+
Answer: You can install Trivy by running the following command:
brew install aquasecurity/trivy/trivy   # for macOS
Alternatively, you can use a binary or a Docker image:
# Download the binary
wget https://github.com/aquasecurity/trivy/releases/latest/download/trivy_$(uname -s)_$(uname -m).tar.gz
tar zxvf trivy_$(uname -s)_$(uname -m).tar.gz
sudo mv trivy /usr/local/bin/
How can you run a basic scan with Trivy
+
Answer: You can perform a basic scan on a Docker image with the following command:
trivy image <image-name>
For example, to scan the latest nginx image, you would use:
trivy image nginx:latest
What types of vulnerabilities can Trivy detect
+
Answer: Trivy can detect various types of vulnerabilities, including: OS package vulnerabilities (e.g., Ubuntu, Alpine); language-specific vulnerabilities (e.g., npm, Python, Ruby); misconfigurations in infrastructure-as-code files; and known vulnerabilities in third-party libraries.
How can you integrate Trivy into a CI/CD pipeline
+
Answer: Trivy can be integrated into a CI/CD pipeline by adding it as a step in the pipeline configuration. For example, in a Jenkins pipeline, you can add a stage to run Trivy scans on your Docker images before deployment. Here's a simple example (Groovy):
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'docker build -t my-image .'
            }
        }
        stage('Scan') {
            steps {
                sh 'trivy image my-image'
            }
        }
        stage('Deploy') {
            steps {
                sh 'docker run my-image'
            }
        }
    }
}
How can you suppress specific vulnerabilities in Trivy
+
Answer: You can suppress specific vulnerabilities in Trivy by creating a .trivyignore file, which lists the vulnerabilities you want to ignore. Each line in the file should contain the CVE identifier of the vulnerability to be ignored. Example .trivyignore file:
CVE-2022-12345
CVE-2021-67890
What are the advantages of using Trivy
+
Answer: The advantages of using Trivy include: Simplicity: easy to install and use, with minimal setup required. Speed: fast scanning of images and quick identification of vulnerabilities. Comprehensive: supports scanning of multiple types of artifacts, including Docker images, file systems, and Git repositories. Continuous updates: a regularly updated vulnerability database ensures accurate detection of vulnerabilities. Integration: can be easily integrated into CI/CD pipelines for automated security checks.
Can Trivy scan local file systems and Git repositories
+
Answer: Yes, Trivy can scan local file systems and Git repositories. To scan a local directory, you can use:
trivy fs <path>
To scan a Git repository, run:
trivy repo <repository-url>
What is the difference between Trivy and other vulnerability scanners
+
Answer: Trivy differentiates itself from other vulnerability scanners in several ways: Ease of use: Trivy is known for its straightforward setup and user-friendly interface. Comprehensive coverage: it scans both OS packages and application dependencies, providing a more holistic view of security. Fast performance: Trivy is designed to be lightweight and quick, allowing for faster scans in CI/CD pipelines. Continuous updates: Trivy frequently updates its vulnerability database, ensuring users have the latest information on vulnerabilities.
Testing
Selenium
What is Selenium, and how is it used in DevOps
+
Answer: Selenium is an open-source framework for automating web applications for testing purposes. In DevOps, Selenium can be integrated into Continuous Integration/Continuous Deployment (CI/CD) pipelines to automate the testing of web applications, ensuring that new code changes do not break existing functionality. This helps maintain software quality while enabling faster releases.
What are the different components of Selenium
+
Answer: Selenium consists of several components: Selenium WebDriver: provides a programming interface for creating and executing test scripts in various programming languages. Selenium IDE: a browser extension for recording and playing back tests. Selenium Grid: allows parallel test execution across different machines and browsers, enhancing testing speed and efficiency. Selenium RC (Remote Control): an older component that has largely been replaced by WebDriver.
How can you integrate Selenium tests into a CI/CD pipeline
+
Answer: Selenium tests can be integrated into a CI/CD pipeline using tools like Jenkins, GitLab CI, or CircleCI. This can be done by: Setting up a testing framework: choose a testing framework (e.g., TestNG, JUnit) compatible with Selenium. Creating test scripts: write automated test scripts using Selenium WebDriver. Configuring the pipeline: in the CI/CD tool, create a build step to run the Selenium tests after the application is built and deployed to a test environment. Using Selenium Grid or Docker: use Selenium Grid for parallel execution or Docker containers to run tests in isolated environments, as sketched below.
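A hedged sketch of the Docker approach mentioned above: a CI job starts the official standalone Chrome image, then runs the suite against it. The Maven property name here is hypothetical and must match whatever the test code reads when it constructs its RemoteWebDriver:
# Start a disposable browser container exposing the WebDriver endpoint
docker run -d -p 4444:4444 --shm-size=2g selenium/standalone-chrome
# Run the tests; 'grid.url' is a made-up property the test code would read
mvn test -Dgrid.url=http://localhost:4444/wd/hub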
What challenges might you face when running Selenium tests in a CI/CDenvironment
+
Answer: Some challenges include: Environment consis tency:Ensuring that the test environment matches the production environment can bedifficult. Browser compatibility:Different browsers may behave differently, leading to inconsis tent test results. Test stability: Flaky tests can lead to unreliable feedback in thepipeline. ∙ Performance: Running tests in parallel may strain resources,leading to longer test execution times if not managed properly.
How do you handle synchronization is sues in Selenium testsAnswer:
+
Synchronization is sues can be addressed by: Implicit Waits: Set a default waiting time for all elements beforethrowing an exception. Explicit Waits: Use WebDriverWait to wait for a specific conditionbefore proceeding, which is more flexible than implicit waits. ∙ FluentWaits: A more advanced wait that allows you to define the polling frequency andignore specific exceptions during the wait period.
Can you explain how you would use Selenium Grid for testingAnswer:
+
Selenium Grid allows you to run tests on multiple machines with different browsersand configurations. To use it: Set up the Hub: Start the SeleniumGrid Hub, which acts as a central point to controlthe tests. Regis ter Nodes: Configure multiplenodes (machines) to regis ter with the hub, specifying the browser and version available on each node. Write Test Scripts: Modify yourSelenium test scripts to point to the Grid Hub, enabling the tests to be executedacross different nodes in parallel. Execute Tests: Run the tests, andthe hub will dis tribute them to the available nodes based on the specified browserand capabilities.
How do you handle exceptions in SeleniumAnswer:
+
Handling exceptions in Selenium can be doneby: Try-Catch Blocks: Wrap your test code in try-catch blocks to catchand handle exceptions like NoSuchElementException, TimeoutException, etc. ∙Logging: Use logging frameworks to log error messages and stack traces foreasier debugging. Screenshots: Capture screenshots on failure usingTakesScreenshot to provide vis ual evidence of what the application looked like at the timeof failure.
How do you ensure the maintainability of Selenium test scripts?
+
Answer: To ensure maintainability:
∙ Use Page Object Model (POM): This design pattern separates the test logic from the UI element locators, making it easier to update tests when UI changes occur.
∙ Modularization: Break down tests into smaller, reusable methods.
∙ Consistent Naming Conventions: Use meaningful names for test methods and variables to improve readability.
∙ Version Control: Store test scripts in a version control system (e.g., Git) to track changes and collaborate with other team members.
How can you run Selenium tests in headless mode?
+
Answer: Running Selenium tests in headless mode allows tests to run without opening a GUI. This can be useful in CI/CD environments. To run in headless mode, set up your browser options. For example, with Chrome:

ChromeOptions options = new ChromeOptions();
options.addArguments("--headless");
WebDriver driver = new ChromeDriver(options);
What is the role of Selenium in the testing pyramid?
+
Answer: Selenium fits within the UI testing layer of the testing pyramid. It is primarily used for end-to-end testing of web applications, focusing on user interactions and validating UI functionality. However, it should complement other types of testing, such as unit tests (at the base) and integration tests (in the middle), to ensure a robust testing strategy. By using Selenium wisely within the pyramid, teams can optimize test coverage and efficiency while reducing flakiness.

Repository/Artifact Management (Nexus)
What is Nexus Repository Manager?
+
Answer: Nexus Repository Manager is a repository management tool that helps developers manage, store, and share their software artifacts. It supports various repository formats, including Maven, npm, NuGet, Docker, and more. Nexus provides a centralized place to manage binaries, enabling better dependency management and efficient artifact storage. It enhances collaboration among development teams and facilitates CI/CD processes by allowing seamless integration with build tools.
What are the main features of Nexus Repository Manager?
+
Answer: Some key features of Nexus Repository Manager include:
∙ Support for Multiple Repository Formats: It supports various formats like Maven, npm, Docker, and others.
∙ Proxying Remote Repositories: It can proxy remote repositories, allowing caching of dependencies to speed up builds.
∙ Artifact Management: Facilitates easy upload, storage, and retrieval of artifacts.
∙ Security and Access Control: Provides fine-grained access control for managing user permissions and securing sensitive artifacts.
∙ Integration with CI/CD Tools: It integrates seamlessly with CI/CD tools like Jenkins, GitLab, and Bamboo, allowing automated artifact deployment and retrieval.
∙ Repository Health Checks: Offers features to monitor repository health and performance.
How do you configure Nexus Repository Manager?
+
Answer: To configure Nexus Repository Manager:
∙ Install Nexus: Download and install Nexus Repository Manager from the official website.
∙ Access the Web Interface: After installation, access the Nexus web interface (usually at http://localhost:8081).
∙ Create Repositories: In the web interface, navigate to "Repositories" and create new repositories for your needs (hosted, proxy, or group repositories).
∙ Set Up Security: Configure user roles and permissions to manage access control.
∙ Configure Proxy Settings (if needed): If using a proxy repository, set up the remote repository URL and caching options.
∙ Integrate with Build Tools: Update your build tools (like Maven or npm) to point to the Nexus repository for dependencies.
What is the difference between a hosted repository, a proxy repository, and a group repository in Nexus?
+
Answer:
∙ Hosted Repository: A repository where you can upload and store your own artifacts. It's typically used for internal projects or artifacts that are not available in public repositories.
∙ Proxy Repository: This type caches artifacts from a remote repository, such as Maven Central or the npm registry. When a build tool requests an artifact, Nexus retrieves it from the remote repository and caches it for future use, speeding up builds and reducing dependency on the internet.
∙ Group Repository: This aggregates multiple repositories (both hosted and proxy) into a single endpoint. It simplifies dependency resolution for users by allowing them to access multiple repositories through one URL.
How do you integrate Nexus Repository Manager with Jenkins?
+
Answer: To integrate Nexus with Jenkins:
∙ Install Nexus Plugin: In Jenkins, install the Nexus Artifact Uploader plugin.
∙ Configure Jenkins Job: In your Jenkins job configuration, specify Nexus Repository Manager settings, such as the repository URL and credentials.
∙ Publish Artifacts: After your build process, use the Nexus plugin to publish artifacts to Nexus by configuring the post-build actions.
∙ Use Nexus for Dependency Management: Update your build tools (like Maven) in Jenkins to resolve dependencies from the Nexus repository.
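As a hedged sketch of the publish step (the host, repository name, credentials ID, and artifact coordinates below are placeholders, and the exact parameters depend on your Nexus Artifact Uploader plugin version), a pipeline step might look like:

nexusArtifactUploader(
    nexusVersion: 'nexus3',
    protocol: 'http',
    nexusUrl: 'nexus.example.com:8081',   // placeholder Nexus host
    repository: 'maven-releases',
    credentialsId: 'nexus-creds',         // credentials stored in Jenkins
    groupId: 'com.example',
    version: "1.0.${env.BUILD_NUMBER}",
    artifacts: [[artifactId: 'demo-app',
                 classifier: '',
                 file: 'target/demo-app.jar',
                 type: 'jar']]
)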
What are the security features in Nexus Repository Manager?
+
Answer: Nexus Repository Manager includes several security features:
∙ User Authentication: Supports LDAP, Crowd, and other authentication mechanisms.
∙ Role-Based Access Control: Allows you to create roles and assign permissions to users or groups, controlling who can access or modify repositories and artifacts.
∙ SSL Support: Can be configured to use HTTPS for secure communication.
∙ Audit Logs: Maintains logs of user actions for security and compliance purposes.
How can you monitor the health and performance of Nexus Repository Manager?
+
Answer: You can monitor the health and performance of Nexus Repository Manager by:
∙ Using the Nexus UI: The web interface provides basic statistics about repository usage and performance metrics.
∙ Health Check Reports: Nexus offers built-in health checks for repositories, allowing you to monitor their status.
∙ Integration with Monitoring Tools: You can integrate Nexus with external monitoring tools like Prometheus or Grafana to get detailed metrics and alerts based on performance and usage data.

Scripting (Linux, Shell Scripting, Python)

Linux
What is a kernel? Is Linux an OS or a kernel?
+
Linux is a kernel, not an OS. The kernel is the core part of an OS that manages hardware and system processes.
What is the difference between virtualization and containerization?
+
∙ Virtualization: Uses virtual machines to run multiple OSes on one machine.
∙ Containerization: Uses containers to run multiple apps on a shared OS.
Which Linux features help Docker work?
+
∙ Namespaces → Provide isolation
∙ Cgroups → Manage resource control
∙ OverlayFS → Used for the file system
What is a symlink in Linux?
+
A symlink, or symbolic link, is a file that points to another file or directory. It acts as a reference to the target file or directory, enabling indirect access.

Explain the difference between a process and a daemon in Linux.
+
A process is a running instance of a program, identified by a unique process ID (PID). A daemon is a background process that runs continuously, often initiated at system boot, and performs specific tasks.
How do you check the free disk space in Linux?
+
Use the df command to display disk space usage of all mounted filesystems, or df -h for a human-readable output.
What is SSH, and how is it useful in a DevOps context?
+
SSH (Secure Shell) is a cryptographic network protocol for secure communication between two computers. In DevOps, SSH is crucial for remote access to servers, executing commands, and transferring files securely.

Explain the purpose of the grep command in Linux.
+
grep is used to search for specific patterns within files or output. It helps extract relevant information by matching text based on regular expressions or simple strings.

Describe how you would find all files modified in the last 7 days in a directory.
+
Use the find command with the -mtime option: find /path/to/directory -mtime -7.

Explain the purpose of the chmod command in Linux.
+
chmod changes file or directory permissions in Linux. It modifies the access permissions (read, write, execute) for the owner, group, and others.
What is the role of cron in Linux?
+
cron is a time-based job scheduler in Unix-like operating systems. It allows tasks (cron jobs) to be executed automatically at specified times or intervals. DevOps teams use cron for scheduling regular maintenance tasks, backups, and automated scripts.
What are runlevels in Linux, and how do they affect system startup?
+
Runlevels are modes of operation that determine which services are running in a Linux system. Different runlevels represent different states, like single-user mode, multi-user mode, and reboot/shutdown. With systemd, runlevels have been replaced with targets like multi-user.target and graphical.target.
How do you secure a Linux server?
+
Steps to secure a Linux server include: regularly updating the system and applying security patches (apt-get update && apt-get upgrade); using firewalls like iptables or ufw to restrict access; enforcing SSH security (disabling root login, using key-based authentication); installing security tools like fail2ban to block repeated failed login attempts; monitoring logs with tools like rsyslog; and restricting permissions on sensitive files using chmod and chown.
What is LVM, and why is it useful in DevOps?
+
LVM (Logical Volume Manager) allows for flexible disk management by creating logical volumes that can span multiple physical disks. It enables dynamic resizing, snapshots, and easier disk management, which is useful in environments that frequently scale storage needs, like cloud infrastructure.
How do you monitor system performance in Linux?
+
Common tools to monitor system performance include:
▪ top or htop for monitoring CPU, memory, and process usage.
▪ vmstat for system performance stats like memory usage and process scheduling.
▪ iostat for disk I/O performance.
▪ netstat or ss for network connections and traffic analysis.
▪ sar from the sysstat package for comprehensive performance monitoring.
What is the difference between a hard link and a soft link (symlink)?
+
A hard link is another name for the same file, sharing the same inode number. If you delete one hard link, the file still exists as long as other hard links exist. A soft link (symlink) points to the path of another file. If the target is deleted, the symlink becomes invalid or broken.
How would you troubleshoot a Linux system that is running out of memory?
+
Steps to troubleshoot memory issues include: checking memory usage with free -h or vmstat; using top or htop to identify memory-hogging processes; reviewing swap usage with swapon -s; checking for memory leaks with ps aux --sort=-%mem or smem; and analyzing the dmesg output for any kernel memory issues.

Explain how you can schedule a one-time task in Linux.
+
Use the at command to schedule a one-time task. Example: echo "sh backup.sh" | at 02:00 will run the backup.sh script at 2 AM. The atq command can be used to view pending jobs, and atrm can remove them.
How would you optimize a Linux system for performance?
+
To optimize a Linux system, consider:
▪ Disabling unnecessary services using systemctl or chkconfig.
▪ Tuning kernel parameters with sysctl (e.g., networking or memory parameters).
▪ Monitoring and managing disk I/O using iotop and improving disk performance with faster storage (e.g., SSD).
▪ Optimizing the use of swap by adjusting the swappiness value (cat /proc/sys/vm/swappiness).
▪ Using performance profiling tools like perf to identify bottlenecks.
How would you deal with high CPU usage on a Linux server?
+
Steps to address high CPU usage:
▪ Use top or htop to find the processes consuming the most CPU.
▪ Use nice or renice to change the priority of processes.
▪ Investigate whether the load is due to I/O-, memory-, or CPU-bound tasks.
▪ Check system logs (/var/log/syslog or /var/log/messages) for any errors or issues.
▪ If a specific application or service is the culprit, consider optimizing or tuning it.

Explain how Linux file permissions work (rwx).
+
In Linux, file permissions are divided into three parts: owner, group, and others. Each part has three types of permissions:
▪ r (read) - Allows viewing the file's contents.
▪ w (write) - Allows modifying the file's contents.
▪ x (execute) - Allows running the file as a program/script.
Example: rwxr-xr-- means the owner has full permissions, the group has read and execute, and others have read-only access.
What is the systemctl command, and why is it important for a DevOps engineer?
+
systemctl is used to control systemd, the system and service manager in modern Linux distributions. It is critical for managing services (start, stop, restart, status), handling boot targets, and analyzing the system's state. A DevOps engineer needs to know how to manage services like web servers, databases, and other critical infrastructure components using systemctl.
What is the purpose of iptables in Linux?
+
iptables is a command-line firewall utility that allows the system administrator to configure rules for packet filtering, NAT (Network Address Translation), and routing. In DevOps, iptables is used to secure systems by controlling incoming and outgoing network traffic based on defined rules.
How would you handle logging in Linux?
+
System logs are stored in /var/log/. Common log management tools include: rsyslog or syslog for centralized logging; journalctl to view and filter logs on systems using systemd; and log rotation with logrotate to manage large log files by rotating and compressing them periodically. For DevOps, integrating logs with monitoring tools like the ELK (Elasticsearch, Logstash, Kibana) stack or Grafana Loki helps in visualizing and analyzing logs in real time.
What is a kernel panic, and how would you troubleshoot it?
+
A kernel panic is a system crash caused by an unrecoverable error in the kernel. To troubleshoot:
∙ Check /var/log/kern.log or use journalctl to analyze kernel messages leading up to the panic.
∙ Use dmesg to view system messages and identify potential hardware or driver issues.
∙ Consider memory testing (memtest86), reviewing recent kernel updates, or checking system hardware.
How do you install a specific version of a package in Linux?
+
On Debian/Ubuntu systems, use apt-cache policy <package> to list available versions and sudo apt-get install <package>=<version> to install one. For Red Hat/CentOS systems, use yum --showduplicates list <package> to find available versions, and sudo yum install <package>-<version> to install it.
What is the command to list all files and directories in Linux?
+
ls → Lists files and directories in the current directory. Use ls -l for detailed information.
How can you check the current working directory in Linux?
+
pwd → Prints the current working directory path.
How do you copy a file from one directory to another?
+
cp source_file destination_directory → Copies the file to the specified location.
How do you move or rename a file in Linux?
+
mv old_name new_name → Renames a file.
mv file /new/directory/ → Moves a file to another directory.
How do you delete a file and a directory in Linux?
+
To delete a file: rm filename
To delete an empty directory: rmdir directory_name
To delete a directory with contents: rm -r directory_name
How do you search for a file in Linux?
+
find /path -name "filename" → Searches for a file in the specified path.
How do you search for a word inside files in Linux?
+
grep "word" filename → Finds lines containing "word" in afile.
How do you check disk usage in Linux?
+
df -h → Shows disk usage in a human-readable format.
How do you check memory usage in Linux?
+
free -m → Displays memory usage in MB.
How do you check running processes in Linux?
+
ps aux → Lists all running processes.
top → Displays live system processes and resource usage.
How can you manage software packages in Ubuntu/Debian-based systems?
+
Use apt (Advanced Package Tool) commands such as apt-get or apt-cache to install, remove, update, or search for packages. Example: sudo apt-get install <package>.

Shell Scripting
What is a shell script? Give an example of how you might use it in DevOps.
+
A shell script is a script written for a shell interpreter (like Bash) to automate tasks. In DevOps, you might use shell scripts for automation tasks such as deploying applications, managing server configurations, or scheduling backups.
How do you create and run a shell script?
+
Create a file: nano script.sh
Add script content:
#!/bin/bash
echo "Hello, World!"
Give execution permission: chmod +x script.sh
Run the script: ./script.sh
How do you pass arguments to a shell script?
+
#!/bin/bash
echo "First argument: $1"
echo "Second argument: $2"
Run the script: ./script.sh arg1 arg2
How do you use a loop in a shell script?
+
for i in {1..5}
do
  echo "Iteration $i"
done
How do you check the process ID (PID) of a running process?
+
ps -ef | grep process_name
How do you kill a running process in Linux?
+
Kill by PID: kill <PID>
Kill by name: pkill process_name
Force kill: kill -9 <PID>
How do you run a process in the background?
+
command & → Runs the process in the background.
jobs → Lists background processes.
How do you bring a background process to the foreground?
+
fg %job_number

Run a process in the background: if you start a command with &, it runs in the background. Example: sleep 100 & starts a process that sleeps for 100 seconds in the background.
Check background jobs: use the jobs command to see running background jobs. Example output: [1]+ Running sleep 100 & — the [1] is the job number.
Bring the background job to the foreground: use the fg command with the job number, e.g., fg %1 brings job number 1 to the foreground.

Python
What is Python's role in DevOps?
+
Answer: Python plays a significant role in DevOps due to its simplicity, flexibility, and extensive ecosystem of libraries and frameworks. It is used in automating tasks such as:
∙ Infrastructure as Code (IaC): Python works well with tools like Terraform, Ansible, and AWS SDKs.
∙ CI/CD Pipelines: Python scripts can automate testing, deployment, and monitoring processes in Jenkins, GitLab CI, etc.
∙ Monitoring and Logging: Python client libraries for Prometheus and Grafana APIs, along with logging frameworks, are helpful in DevOps tasks.
How can you use Python in Jenkins pipelines?
+
Answer: Python can be used in Jenkins pipelines to automate steps, such as testing, packaging, or deployment, by calling Python scripts directly within a pipeline. For example, a Jenkinsfile might have:

pipeline {
    agent any
    stages {
        stage('Run Python Script') {
            steps {
                sh 'python3 script.py'
            }
        }
    }
}

In this example, the sh command runs a Python script during the build pipeline.
How would you manage environment variables in Python for a DevOps project?
+
Answer: Environment variables are essential in DevOps for managing sensitive information like credentials and configuration values. In Python, you can use the os module to access environment variables:

import os

db_url = os.getenv("DATABASE_URL", "default_value")

For securely managing environment variables, you can use tools like dotenv or Docker secrets, depending on your infrastructure.
How do you use Python to interact with a Kubernetes cluster?
+
Answer: You can use the kubernetes Python client to interact with Kubernetes. Here's an example of listing pods in a specific namespace:

from kubernetes import client, config

# Load kubeconfig
config.load_kube_config()

v1 = client.CoreV1Api()
pods = v1.list_namespaced_pod(namespace="default")
for pod in pods.items:
    print(f"Pod name: {pod.metadata.name}")

Python is also useful for writing custom Kubernetes operators or controllers.
How do you use Python to monitor server health in DevOps?
+
Answer: You can use Python along with libraries like psutil or APIs to monitor server health. Here's an example using psutil to monitor CPU and memory usage:

import psutil

# Get CPU usage
cpu_usage = psutil.cpu_percent(interval=1)
print(f"CPU Usage: {cpu_usage}%")

# Get memory usage
memory = psutil.virtual_memory()
print(f"Memory Usage: {memory.percent}%")

This can be extended to send metrics to monitoring tools like Prometheus or Grafana.
What is the use of the subprocess module in DevOps scripting?
+
Answer: The subprocess module allows you to spawn new processes, connect to their input/output/error pipes, and retrieve return codes. It's useful in DevOps for automating shell commands, deploying code, etc. Example:

import subprocess

# Run a shell command
result = subprocess.run(["ls", "-l"], capture_output=True, text=True)

# Print output
print(result.stdout)

It allows you to integrate shell command outputs directly into your Python scripts for tasks like running Docker commands or interacting with external tools.
How do you handle exceptions in Python scripts for DevOps automation?
+
Answer: Error handling is critical in automation to prevent scripts from crashing and to ensure reliable recovery. In Python, try-except blocks are used for handling exceptions:

import subprocess

try:
    # Code that may raise an exception: a missing executable raises
    # FileNotFoundError, and a non-zero exit code raises
    # CalledProcessError because of check=True
    result = subprocess.run(["non_existing_command"], check=True)
except (subprocess.CalledProcessError, FileNotFoundError) as e:
    print(f"Error occurred: {e}")

You can customize the error messages, log them, or trigger a retry mechanism if needed.
Can you explain how Python works with cloud services in DevOps?
+
Answer: Python can interact with cloud platforms (AWS, GCP, Azure) using SDKs. For example, using Boto3 to work with AWS:

import boto3

# Initialize S3 client
s3 = boto3.client('s3')

# List all buckets
buckets = s3.list_buckets()
for bucket in buckets['Buckets']:
    print(bucket['Name'])

Python helps automate infrastructure provisioning, deployment, and scaling in the cloud.
How do you use Python for log monitoring in DevOps?
+
Answer: Python can be used to analyze and monitor logs by reading log files or using services like ELK (Elasticsearch, Logstash, Kibana). For instance, reading a log file in Python:

with open('app.log', 'r') as file:
    for line in file:
        if "ERROR" in line:
            print(line)

You can integrate this with alerting mechanisms like Slack or email notifications when certain log patterns are detected.
How would you use Python in a Dockerized DevOps environment?
+
Answer: Python is often used to write the application logic inside Docker containers or to manage containers using the Docker SDK:

import docker

# Initialize Docker client
client = docker.from_env()

# Pull an image
client.images.pull('nginx')

# Run a container
container = client.containers.run('nginx', detach=True)
print(container.id)

Python scripts can be included in Docker containers to automate deployment or orchestration tasks.

Combined (GitHub Actions, ArgoCD, Kubernetes)
How would you deploy a Kubernetes application using GitHub Actions and ArgoCD?
+
Answer: First, set up a GitHub Actions workflow to push changes to a Git repository that ArgoCD monitors. ArgoCD will automatically sync the changes to the Kubernetes cluster based on the desired state in the Git repo. The GitHub Action may also include steps to lint Kubernetes manifests, run tests, and trigger ArgoCD syncs.
Can you explain the GitOps workflow in Kubernetes using ArgoCD and GitHub Actions?
+
Answer: In a GitOps workflow:
∙ Developers push code or manifest changes to a Git repository.
∙ A GitHub Actions workflow can validate the changes and push the updated Kubernetes manifests.
∙ ArgoCD monitors the repository and automatically syncs the live Kubernetes environment to match the desired state in Git.
How do you manage secrets for Kubernetes deployments in GitOps using GitHub Actions and ArgoCD?
+
Answer: You can manage secrets using tools like Sealed Secrets, HashiCorp Vault, or Kubernetes Secret management combined with GitHub Actions and ArgoCD. GitHub Actions can store and use secrets, while in Kubernetes you would use sealed or encrypted secrets to safely commit secrets into the Git repository.

DevOps Shack: 200 Jenkins Scenario-Based Questions and Answers

How would you design a Jenkins setup for a large-scale enterprise application with multiple teams?
+
Design a master-agent architecture where the master handles scheduling and orchestrating jobs, and agents execute jobs. Use distributed builds by configuring Jenkins agents on different machines or containers. Implement folder-based multi-tenancy to isolate pipelines for each team. Secure the Jenkins setup using role-based access control (RBAC). Example: Team A has access to Folder A with restricted pipeline visibility, while the master node ensures no resource contention.
How can you scale Jenkins to handle high build loads?
+
Use Kubernetes-based Jenkins agents that scale dynamically based on workload. Implement build queue monitoring and optimize resource allocation by offloading non-critical jobs to low-priority nodes. Use Jenkins Operations Center (CloudBees CI) for centralized management of multiple Jenkins instances.
How do you manage plugins in a Jenkins environment to ensure stability?
+
Maintain a list of approved plugins after testing compatibility with the Jenkins version. Regularly update plugins in a staging environment before rolling them into production. Example: While upgrading the Git plugin, test it with your pipelines in staging to ensure no disruption.
How do you design a Jenkins pipeline to support multiple environments (e.g., Dev, QA, Prod)?
+
Use parameterized pipelines where environment-specific configurations (e.g., URLs, credentials) are passed as parameters. Implement environment-specific stages or branch-specific pipelines. Example: A pipeline that promotes a build from Dev to QA and then to Prod using approval gates between stages.
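A minimal sketch of such a parameterized pipeline (the deploy script, config paths, and environment names are illustrative assumptions, not a prescribed layout):

pipeline {
    agent any
    parameters {
        choice(name: 'ENVIRONMENT', choices: ['dev', 'qa', 'prod'],
               description: 'Target environment')
    }
    stages {
        stage('Approval') {
            // Require a manual gate only for production deployments
            when { expression { params.ENVIRONMENT == 'prod' } }
            steps {
                input message: "Deploy to ${params.ENVIRONMENT}?"
            }
        }
        stage('Deploy') {
            steps {
                // deploy.sh and the per-environment config path are placeholders
                sh "./deploy.sh --config config/${params.ENVIRONMENT}.yaml"
            }
        }
    }
}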
How can you handle dynamic branch creation in Jenkins pipelines?
+
Use multibranch pipelines that automatically detect new branches in a repository and create pipelines for them. Configure the Jenkinsfile in each branch to define its pipeline behavior.
How do you ensure pipeline resilience in case of intermittent failures?
+
Use retry blocks in declarative or scripted pipelines to retry failed stages. Example: Retrying a flaky test stage three times with exponential backoff. Implement conditional steps using catchError to handle failures gracefully.
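A short sketch of both ideas in one declarative pipeline (the shell scripts are illustrative placeholders):

pipeline {
    agent any
    stages {
        stage('Flaky Tests') {
            steps {
                // Re-run the (illustrative) test command up to 3 times before failing the stage
                retry(3) {
                    sh './run-integration-tests.sh'
                }
            }
        }
        stage('Non-Critical Task') {
            steps {
                // Mark only this stage as failed and let the build continue
                catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
                    sh './non-critical-task.sh'
                }
            }
        }
    }
}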
How do you secure sensitive credentials in Jenkins pipelines?
+
Use the Jenkins Credentials plugin to store secrets securely. Access credentials using environment variables or bindings in the pipeline. Example: Fetch an API key stored in Jenkins credentials using withCredentials in a scripted pipeline.
How do you enforce role-based access control (RBAC) in Jenkins?
+
Use the Role-Based Authorization Strategy plugin. Define roles like Admin, Developer, and Viewer, and assign permissions for jobs, folders, and builds accordingly.
How do you integrate Jenkins with Docker for building and deploying applications?
+
Use the Docker plugin or Docker Pipeline plugin. Example: Build a Docker image in the pipeline using docker.build and push it to a container registry. Run tests in ephemeral Docker containers for consistent test environments.
How do you integrate Jenkins with a Kubernetes cluster for deployments?
+
Use the Kubernetes plugin or kubectl commands in the pipeline. Example: Use a Kubernetes pod template with custom containers for builds, then deploy applications using kubectl apply.
How can you reduce the build time of a Jenkins job?
+
Use parallel stages to execute independent tasks simultaneously. Example: Parallelize static code analysis, unit tests, and integration tests. Use build caching mechanisms like Docker layer caching or dependency caching.
How do you optimize Jenkins for CI/CD pipelines with heavy test loads?
+
Split tests into smaller batches and run them in parallel. Use sharding for distributed test execution across multiple agents. Example: Divide a 10,000-test suite into 10 shards and distribute them across agents.
What would you do if a Jenkins job hangs indefinitely?
+
Check the Jenkins build logs for deadlocks or resource contention. Restart the agent where the build is stuck, if needed. Example: A job stuck in docker build could indicate Docker daemon issues; restart the Docker service.
How do you troubleshoot a Jenkins job that keeps failing at the same step?
+
Analyze the console output to identify the error message. Check for environmental issues like missing dependencies or incorrect permissions. Example: A Maven build failing due to repository connectivity might require checking proxy configurations.
How do you implement manual approval gates in Jenkins pipelines?
+
Use the input step in a declarative pipeline. Example: Add an approval step before deploying to production. Only after manual confirmation does the pipeline proceed.
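A minimal sketch of an approval gate (the submitter group and deployment script are illustrative assumptions):

pipeline {
    agent any
    stages {
        stage('Deploy to Production') {
            steps {
                // Pause the pipeline until a user confirms; 'submitter' restricts who may approve
                input message: 'Deploy this build to production?', submitter: 'release-managers'
                sh './deploy-prod.sh'   // illustrative deployment command
            }
        }
    }
}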
How do you handle blue-green deployments in Jenkins?
+
Create separate pipelines for blue and green environments. Route traffic to the new environment after successful deployment and health checks. Example: Use AWS Route53 or Kubernetes Ingress to switch traffic seamlessly.
How do you monitor Jenkins build trends?
+
Use the Build History and Build Monitor plugins. Example: Visualize pass/fail trends over time to identify flaky tests.
How do you notify teams about build failures?
+
Use the Email Extension or Slack Notification plugins. Example: Configure a Slack webhook to notify the #build-alerts channel upon failure.
How do you manage monorepos in Jenkins pipelines?
+
Use sparse checkouts to fetch only the required directories. Example: Trigger pipelines based on changes in specific subdirectories using the dir parameter in Git.
How do you handle merge conflicts in a Jenkins pipeline?
+
Use Git pre-merge hooks or resolve conflicts locally and push the updated code. Example: A pipeline can fetch both source and target branches, merge them in a temporary branch, and check for conflicts.
How do you trigger a Jenkins pipeline from another pipeline?
+
Use the build step in a scripted or declarative pipeline to trigger another pipeline. Example: Pipeline A builds the application, and Pipeline B deploys it. Pipeline A calls Pipeline B using build(job: 'Pipeline-B', parameters: [string(name: 'version', value: '1.0')]).
How do you handle shared libraries in Jenkins pipelines?
+
Use the Global Shared Libraries feature in Jenkins. Example: Create reusable Groovy functions for common tasks (e.g., linting, packaging) and call them in pipelines using @Library('my-library').
How do you implement conditional logic in Jenkins pipelines?
+
Use when in declarative pipelines or if statements in scripted pipelines. Example: Skip deployment if the branch is not main using when { branch 'main' }.
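In context, that when condition might look like the following minimal sketch (the deploy script is an illustrative placeholder):

pipeline {
    agent any
    stages {
        stage('Deploy') {
            // Skip this stage entirely on any branch other than main
            when { branch 'main' }
            steps {
                sh './deploy.sh'   // illustrative deployment script
            }
        }
    }
}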
How do you handle job failures in a Jenkins pipeline?
+
Use the catchError block to handle errors gracefully. Example:
catchError {
    sh 'some-failing-command'
}
echo 'Handled the failure and proceeding.'
What would you do if a Jenkins master node crashes?
+
Restore the master node from backups. Use Jenkins' thinBackup or a similar plugin for automated backups. Example: After restoration, ensure the plugins and configuration are synchronized.
How do you restart a failed Jenkins pipeline from a specific stage?
+
Enable the Restart from Stage feature in the Jenkins declarative pipeline. Example: If the Deploy stage fails, restart the pipeline from that stage without re-executing previous stages.
How do you integrate Jenkins with SonarQube for code quality analysis?
+
Use the SonarQube Scanner plugin. Example: Add a stage in the pipeline to run sonar-scanner and publish results to the SonarQube server.
How do you enforce code coverage thresholds in Jenkins pipelines?
+
Use tools like JaCoCo or Cobertura and configure the build to fail if thresholds are not met. Example: jacoco(execPattern: '**/jacoco.exec', minimumBranchCoverage: '80')
How do you implement parallelism in Jenkins pipelines?
+
Use the parallel directive in declarative pipelines or a parallel block in scripted pipelines. Example: Run unit tests, integration tests, and linting in parallel stages.
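A compact sketch of the parallel directive (the Maven goals and profile name are illustrative assumptions):

pipeline {
    agent any
    stages {
        stage('Quality Gates') {
            parallel {
                stage('Unit Tests') {
                    steps { sh 'mvn test' }
                }
                stage('Integration Tests') {
                    steps { sh 'mvn verify -Pintegration' }   // profile name is illustrative
                }
                stage('Lint') {
                    steps { sh 'mvn checkstyle:check' }
                }
            }
        }
    }
}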
How do you optimize resource utilization in Jenkins?
+
Use lock to manage resource contention. Example: Limit concurrent jobs accessing a shared environment using lock('resourceName').
How do you run Jenkins jobs in a Docker container?
+
Use the docker block in declarative pipelines. Example: agent { docker { image 'node:14' } }
How do you ensure consistent environments for Jenkins builds?
+
Use Docker images to define build environments. Example: Use a prebuilt image with all dependencies pre-installed for faster builds.
How do you integrate Jenkins with AWS for CI/CD?
+
Use the AWS CLI or AWS-specific Jenkins plugins. Example: Deploy an application to S3 using aws s3 cp commands in the pipeline.
How do you configure Jenkins to deploy to Azure Kubernetes Service (AKS)?
+
Use kubectl commands with AKS credentials stored in Jenkins credentials. Example: Deploy manifests using sh 'kubectl apply -f k8s.yaml'.
How do you trigger a Jenkins job when a file changes in Git?
+
Use GitHub or Bitbucket webhooks configured with the Jenkins job. Example: A webhook triggers the job only for changes in a specific folder by setting path filters.
How do you schedule periodic builds in Jenkins?
+
Use the Build periodically option or cron syntax in pipeline scripts. Example: Schedule a nightly build using H 0 * * *.
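The same schedule expressed as a declarative triggers block (the build command is an illustrative placeholder):

pipeline {
    agent any
    triggers {
        // 'H' lets Jenkins spread start times instead of firing every job exactly at midnight
        cron('H 0 * * *')
    }
    stages {
        stage('Nightly Build') {
            steps { sh 'mvn clean package' }
        }
    }
}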
How do you audit build logs and job execution in Jenkins?
+
Enable the Audit Trail plugin to track user actions. Example: View changes made to jobs, builds, and plugins.
How do you implement compliance checks in Jenkins pipelines?
+
Integrate with tools like OpenSCAP or custom scripts for compliance validation. Example: Validate infrastructure as code (IaC) templates for compliance before deployment.
How do you manage build artifacts in Jenkins?
+
Use the Archive the artifacts post-build step. Example: Store JAR files and logs for future reference using archiveArtifacts artifacts: 'build/*.jar'.
How do you publish artifacts to a repository like Nexus or Artifactory?
+
Use Maven/Gradle plugins or REST APIs for publishing. Example: Push a JAR file to Nexus with: sh 'mvn deploy'
How do you notify a team about pipeline status?
+
Use Slack or Email plugins for notifications. Example: Notify Slack on success or failure with:
slackSend channel: '#builds', message: "Build #${env.BUILD_NUMBER} ${currentBuild.result}"
How do you send detailed build reports via email in Jenkins?
+
Use the Email Extension plugin and configure templates for detailed reports. Example: Include build logs and test results in the email.
How do you back up Jenkins configurations?
+
Use the thinBackup plugin or a manual backup of $JENKINS_HOME. Example: Automate backups nightly and store them in a secure location like S3.
How do you recover a Jenkins instance from backup?
+
Restore the $JENKINS_HOME directory from the backup and restart Jenkins. Example: After restoration, validate all jobs and credentials.
How do you implement feature flags in Jenkins pipelines?
+
Use environment variables or external tools like LaunchDarkly. Example: A feature flag determines whether to deploy the feature branch.
How do you integrate Jenkins with a database for testing?
+
Spin up a database container or use a preconfigured test database. Example: Use Docker Compose to bring up a MySQL container before running tests.
How do you manage long-running jobs in Jenkins?
+
Break them into smaller jobs or stages to allow checkpoints. Example: Use timeout to terminate excessively long builds.
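A minimal sketch of a stage-level timeout (the two-hour limit and build command are illustrative):

pipeline {
    agent any
    stages {
        stage('Build') {
            options {
                // Abort this stage if it runs longer than 2 hours
                timeout(time: 2, unit: 'HOURS')
            }
            steps { sh 'mvn clean package' }
        }
    }
}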
What would you do if Jenkins pipelines start failing intermittently?
+
Investigate resource constraints, flaky tests, or network issues. Example: Monitor agent logs and rebuild affected stages.
How do you manage Jenkins jobs for multiple branches in a monorepo?
+
Use multibranch pipelines or branch-specific Jenkinsfiles.
How do you handle cross-team collaboration in Jenkins pipelines?
+
Use shared libraries for reusable code and maintain a central Jenkins governance team.
How do you manage Jenkins agents in a dynamic cloud environment?
+
Use a cloud provider plugin (e.g., Amazon EC2 or Kubernetes). Example: Configure Kubernetes-based agents to dynamically spin up pods based on job demands.
How do you limit the number of concurrent builds for a Jenkins job?
+
Use the Throttle Concurrent Builds plugin. Example: Set a limit of two builds per agent to avoid resource contention.
How do you optimize Jenkins for large-scale builds with limited hardware?
+
Use build labels to distribute specific jobs to the right agents. Example: Assign resource-intensive builds to high-capacity agents with labels like high_mem.
How do you implement custom notifications in Jenkins pipelines?
+
Use a custom script to send notifications via APIs. Example: Integrate with Microsoft Teams by using their webhook API to send custom alerts.
How do you alert stakeholders only on critical build failures?
+
Use conditional steps in pipelines to send notifications based on failure type. Example: Notify stakeholders if the failure occurs in the Deploy stage.
How do you manage dependencies in a Jenkins CI/CD pipeline?
+
Use dependency management tools like Maven, Gradle, or npm. Example: Use a package.json or pom.xml file to ensure consistent dependencies across builds.
How do you handle dependency conflicts in a Jenkins build?
+
Use the dependency resolution features of tools like Maven or Gradle. Example: Exclude transitive dependencies causing conflicts in the pom.xml.
How do you debug Jenkins pipeline failures effectively?
+
Enable verbose logging for specific stages or commands. Example: Use sh 'set -x && your-command' for detailed command output.
How do you log custom messages in Jenkins pipelines?
+
Use the echo step in declarative or scripted pipelines. Example: echo "Starting deployment to environment: ${env.ENV_NAME}".
How do you monitor Jenkins server health?
+
Use the Monitoring plugin or external tools like Prometheus and Grafana. Example: Monitor JVM memory, disk usage, and thread activity using Prometheus exporters.
How do you set up Jenkins alerts for high resource usage?
+
Integrate Jenkins with monitoring tools like Nagios or Datadog. Example: Trigger an alert if CPU usage exceeds 80% during builds.
How do you set up pipelines to work on multiple operating systems?
+
Use agent labels to target specific platforms (e.g., linux, windows). Example: Run tests on both Linux and Windows agents using parallel stages.
How do you ensure portability in Jenkins pipelines across environments?
+
Use containerized builds with Docker for a consistent runtime. Example: Build and test the application in the same Docker image.
How do you create custom build steps in Jenkins?
+
Use the Pipeline Utility Steps plugin or write custom Groovy scripts. Example: Create a step to clean the workspace, fetch dependencies, and run tests.
How do you extend Jenkins functionality with custom plugins?
+
Develop a custom Jenkins plugin using the Jenkins Plugin Development Kit (PDK). Example: A plugin to integrate Jenkins with a proprietary deployment system.
How do you integrate Jenkins with performance testing tools like JMeter?
+
Use the Performance Plugin to parse JMeter results. Example: Trigger a JMeter script, then analyze results with thresholds for build pass/fail criteria.
How do you fail a Jenkins build if performance metrics are below expectations?
+
Add a stage to validate performance metrics against predefined thresholds. Example: Fail the build if response time exceeds 500 ms.
How do you trigger a Jenkins job based on an external event (e.g., an API call)?
+
Use the Jenkins remote trigger URL with an API token. Example: Trigger a job using curl -X POST <jenkins-url>/job/<job-name>/build?token=<token>.
How do you schedule a Jenkins job to run only on specific days?
+
Use a cron expression in the Build periodically field. Example: Schedule a job for Mondays and Fridays using H H * * 1,5.
How do you use Jenkins to automate database migrations?
+
Integrate with tools like Flyway or Liquibase. Example: Add a pipeline stage to run migration scripts before deployment.
How do you verify database changes in a Jenkins pipeline?
+
Add a test stage to validate schema changes or data consistency. Example: Run SQL queries to ensure migration scripts worked as expected.
How do you secure Jenkins pipelines from malicious scripts?
+
Use sandboxed Groovy scripts and validate third-party Jenkinsfiles. Example: Use a code review process for external contributions.
How do you protect sensitive information in Jenkins logs?
+
Mask sensitive information using the Mask Passwords plugin. Example: API keys are replaced with **** in logs.
How do you implement versioning in Jenkins pipelines?
+
Use build numbers or Git tags for versioning. Example: Generate a version like 1.0.${BUILD_NUMBER} during the build process.
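One way this could look in a Jenkinsfile (a sketch assuming a Maven project; versions:set comes from the versions-maven-plugin, and the 1.0 prefix is an illustrative choice):

pipeline {
    agent any
    environment {
        // Derive the artifact version from the Jenkins build number
        VERSION = "1.0.${env.BUILD_NUMBER}"
    }
    stages {
        stage('Package') {
            steps {
                // versions:set rewrites the POM version before packaging
                sh "mvn versions:set -DnewVersion=${env.VERSION} && mvn package"
            }
        }
    }
}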
How do you automate release tagging in Jenkins?
+
Use git tag commands in the pipeline. Example: Add a post-build step to tag the release and push it to the repository.
How do you fix "agent offline" issues in Jenkins?
+
Verify network connectivity, agent logs, and master-agent configurations. Example: Check whether the agent process has permissions to connect to the master.
What would you do if Jenkins fails to fetch code from a Git repository?
+
Check Git plugin configurations, the repository URL, and access credentials. Example: Verify that the SSH key used by Jenkins is valid.
How do you implement canary deployments in Jenkins?
+
Deploy a small percentage of traffic to the new version and monitor before full rollout. Example: Use a custom script or plugin to automate traffic shifting.
How do you automate rollback in Jenkins pipelines?
+
Maintain a record of previous deployments and redeploy the last successful build. Example: Use a rollback stage that fetches artifacts of the previous version.
How do you ensure Jenkins pipelines are maintainable?
+
Use shared libraries, modular pipelines, and clear documentation. Example: Abstract repetitive tasks like linting or packaging into shared library functions.
How do you handle Jenkins updates in a production environment?
+
Test updates in a staging environment before applying them to production. Example: Validate that plugins are compatible with the new Jenkins version.
How do you handle long-running builds in Jenkins?
+
Use timeout steps to terminate excessive runtimes. Example: Fail the build if it exceeds 2 hours.
How do you prioritize critical jobs in Jenkins?
+
Assign higher priority to critical jobs using the Priority Sorter plugin. Example: Ensure deployment jobs are always queued before non-critical ones.
How do you build and test multiple modules of a monolithic application in Jenkins?
+
Use a multi-module build system like Maven or Gradle to compile and test each module independently. Example: Add stages in the pipeline to build, test, and package modules sequentially or in parallel.
How do you configure Jenkins to build microservices independently?
+
Use separate pipelines for each microservice. Example: Trigger the build of a specific microservice based on changes in its folder using the path parameter in multibranch pipelines.
How do you integrate Jenkins with Selenium for UI testing?
+
Use Selenium WebDriver and the Jenkins Selenium plugin. Example: Add a stage in the pipeline to run Selenium test scripts on a dedicated test environment.
How do you fail a Jenkins build if tests fail intermittently?
+
Use the retry block to re-run flaky tests a limited number of times. Example: Fail the build after three retries if the tests continue to fail.
How do you pass parameters dynamically to a Jenkins pipeline?
+
Use parameterized builds and populate parameters dynamically through a script. Example: Use the Active Choices plugin to populate a dropdown with values fetched from an API.
How do you create matrix builds in Jenkins?
+
Use the Matrix plugin or a declarative pipeline with matrix stages. Example: Test an application on multiple OS and Java versions.
How do you back up and restore Jenkins jobs?
+
Back up the $JENKINS_HOME/jobs directory. Example: Automate backups using a cron job or tools like thinBackup.
What steps would you follow to restore Jenkins jobs from backup?
+
Stop Jenkins, copy the backed-up job configurations to the $JENKINS_HOME/jobs directory, and restart Jenkins. Example: Verify job configurations and plugin dependencies post-restoration.
How do you use Jenkins to validate Infrastructure as Code (IaC)?
+
Integrate tools like Terraform or CloudFormation with Jenkins pipelines. Example: Add a stage to validate Terraform plans using terraform validate.
How do you implement automated provisioning using Jenkins?
+
Use Jenkins to trigger Terraform or Ansible scripts for provisioning infrastructure. Example: Provision an AWS EC2 instance and deploy an application on it as part of the pipeline.
How do you test across multiple environments simultaneously in Jenkins?
+
Use parallel stages in declarative pipelines. Example: Run tests on Dev, QA, and Staging environments in parallel.
How do you configure Jenkins to run parallel builds for multiple branches?
+
Use multibranch pipelines to detect and execute builds for all branches. Example: Each branch builds independently in its pipeline.
How do you securely pass secrets to a Jenkins job?
+
Use the Credentials plugin to inject secrets into the pipeline. Example: Use withCredentials to pass a secret API key to a shell script:
withCredentials([string(credentialsId: 'api-key', variable: 'API_KEY')]) {
    sh 'curl -H "Authorization: $API_KEY" https://api.example.com'
}
How do you audit the usage of credentials in Jenkins?
+
Enable auditing through the Audit Trail plugin and monitor credential usage logs. Example: Identify unauthorized access to sensitive credentials.
How do you manage a situation where a Jenkins job is stuck indefinitely?
+
Identify the issue by reviewing the build logs and system resource usage. Example: Terminate the stuck process on the agent and re-trigger the job.
How do you handle pipeline execution that consumes excessive resources?
+
Use resource quotas or throttle settings to limit resource usage. Example: Assign builds to low-resource agents for non-critical jobs.
How do you implement multi-cloud deployments using Jenkins?
+
Configure multiple cloud credentials and deploy to each provider conditionally. Example: Deploy to AWS, Azure, and GCP using environment-specific deployment scripts.
How do you monitor Jenkins pipeline performance?
+
Use plugins like Build Monitor, Prometheus, or Performance Publisher to track performance metrics. Example: Analyze pipeline execution time trends to optimize slow stages.
How do you generate build trend reports in Jenkins?
+
Use the Test Results Analyzer or Dashboard View plugin. Example: Visualize the number of passed, failed, and skipped tests over time.
How do you create dynamic stages in a Jenkins pipeline?
+
Use Groovy scripting in a scripted pipeline to define stages dynamically. Example: Loop through a list of services and create a build stage for each.
How do you dynamically load environment configurations in Jenkins?
+
Use configuration files stored in a repository or as a Jenkins shared library. Example: Load environment-specific variables from a JSON file during the pipeline execution.
How do you implement build caching in Jenkins pipelines?
+
Use tools like the Docker cache or Gradle/Maven build caches. Example: Use a shared cache directory for dependencies across builds.
How do you handle incremental builds in Jenkins?
+
Configure the pipeline to build only the modified components using tools like Git diff. Example: Trigger builds for only the microservices that have changed.
How do you set up Jenkins for multitenant usage across teams?
+
Use folders, RBAC, and dedicated agents for each team. Example: Team A and Team B have separate folders with isolated pipelines and credentials.
How do you handle conflicts when multiple teams use shared Jenkins resources?
+
Use the Lockable Resources plugin to serialize access to shared resources. Example: Ensure only one team can deploy to the staging environment at a time.
How do you recover a pipeline that fails due to a transient issue?
+
Use retry blocks to automatically retry the failed step. Example: Retry a deployment step up to three times if it fails due to network issues.
How do you resume a pipeline after fixing an error?
+
Use the Restart from Stage feature in declarative pipelines. Example: Resume the pipeline from the Deploy stage after fixing a configuration issue.
How do you integrate Jenkins with JIRA for issue tracking?
+
Use the JIRA plugin to update issue status automatically after a build. Example: Transition a JIRA ticket to "In Progress" when the build starts.
How do you integrate Jenkins with a service bus or message queue?
+
Use custom scripts or plugins to publish messages to RabbitMQ, Kafka, or AWS SQS. Example: Notify downstream systems after a successful deployment by sending a message to a queue.
How do you use Jenkins to build and test containerized applications?
+
Use the Docker Pipeline plugin to build and test images. Example: Build a Docker image in one stage and run tests in a containerized environment in the next stage.
How do you manage container orchestration with Jenkins?
+
Use Kubernetes or Docker Compose to orchestrate multi-container environments. Example: Deploy application and database containers together for integration tests.
How do you allocate specific agents for certain pipelines?
+
Use agent labels in the pipeline configuration. Example: Assign a pipeline to a high-memory agent for resource-intensive builds.
How do you ensure efficient resource utilization across Jenkins agents?
+
Use the Load Balancer plugin or Jenkins cloud agents for dynamic scaling. Example: Scale down idle agents during off-peak hours.
How do you manage Jenkins configurations across environments?
+
Use tools like Jenkins Configuration as Code (JCasC) or custom Groovy scripts. Example: Use a YAML configuration file to define jobs, credentials, and plugins.
How do you version control Jenkins jobs and pipelines?
+
Store pipeline scripts in a Git repository. Example: Use Jenkinsfiles to define pipelines, making them portable and traceable.
How do you implement rolling deployments with Jenkins?
+
Deploy updates incrementally to a subset of servers or pods. Example: Update 10% of the pods in Kubernetes before proceeding to the next batch.
How do you automate blue-green deployments in Jenkins?
+
Use separate environments for blue and green and switch traffic post-deployment. Example: Use a load balancer to toggle between environments after successful tests.
How do you integrate Jenkins with API testing tools like Postman?
+
Use Newman (the Postman CLI) in the pipeline to execute collections. Example: Run newman run collection.json in a test stage.
How do you handle test data for automated testing in Jenkins?
+
Use environment variables or configuration files to provide test data. Example: Pass database credentials as environment variables during test execution.
How do you automate release notes generation in Jenkins?
+
Use a custom script or plugin to fetch Git commit messages or JIRA updates. Example: Generate release notes from commits tagged with [release].
How do you implement versioning in a CI/CD pipeline?
+
Use Git tags or build numbers to version artifacts. Example: Create a version string like 1.0.${BUILD_NUMBER} for every build.
What steps would you take if Jenkins builds suddenly start failing across all jobs?
+
Check global configurations, credentials, and plugin updates. Example: Investigate whether a recent plugin update caused compatibility issues.
How do you handle Jenkins agent disconnections during builds?
+
Configure a reconnect strategy or reassign the job to another agent. Example: Use a script to auto-restart disconnected agents.
How do you design pipelines to handle varying deployment strategies?
+
Use parameters to define the deployment type (e.g., rolling, canary). Example: A pipeline prompts the user to select the strategy before deployment.
How do you configure pipelines for multiple repository triggers?
+
Use a webhook aggregator to trigger the pipeline for changes in multiple repositories. Example: Trigger a build when changes are made to either the frontend or backend repositories.
How do you ensure compliance with Jenkins pipelines?
+
Use tools like SonarQube for code quality checks and enforce policies with shared libraries. Example: Ensure every pipeline includes a security scan stage.
How do you audit pipeline execution in Jenkins?
+
Use the Audit Trail plugin to track changes and execution history. Example: Identify who triggered a job and when.
How do you set up Jenkins for high availability?
+
Use a clustered setup with multiple Jenkins masters and shared storage. Example: Configure an NFS share for $JENKINS_HOME to ensure consistency across masters.
What's your approach to restoring Jenkins from a disaster?
+
Restore configurations and data from backups, then validate plugins and jobs. Example: Use thinBackup to quickly recover Jenkins data.
How do you implement Jenkins backups for critical environments?
+
Use tools like thinBackup or Jenkins Configuration as Code (JCasC) to back up configurations, jobs, and plugins. Automate the process with cron jobs or scripts. Example: Automate daily backups of the $JENKINS_HOME directory and store them on S3 or another secure location.
What strategies do you recommend for Jenkins disaster recovery?
+
Use a secondary Jenkins instance as a standby master with replicated data. Example: Periodically sync $JENKINS_HOME between the primary and standby instances and use a load balancer for failover.
How do you handle consistent build failures caused by flaky tests?
+
Identify flaky tests using test reports and isolate them into separate test suites. Example: Retry only the flaky tests multiple times in a dedicated pipeline stage.
What would you do if builds fail due to resource exhaustion?
+
Optimize resource allocation by reducing the number of concurrent builds or increasing system capacity. Example: Add more Jenkins agents or limit concurrent jobs with the Throttle Concurrent Builds plugin.
How do you manage environment-specific variables in Jenkins pipelines?
+
Use environment variables defined in the Jenkinsfile or external configuration files. Example: Load environment-specific files based on the selected parameter using:
def config = readYaml file: "config/${env.ENVIRONMENT}.yaml"
How do you handle multi-environment deployments in a single pipeline?
+
Use declarative pipeline stages with conditional logic for different environments. Example: Deploy to QA, Staging, and Production in sequence with manual approval gates for Staging and Production.
How do you reduce pipeline execution time for large applications?
+
Use parallel stages, build caching, and pre-configured environments. Example: Parallelize the unit test, integration test, and static code analysis stages.
How do you identify and fix bottlenecks in Jenkins pipelines?
+
Use performance plugins or monitor logs to detect slow stages. Example: Split a long-running build stage into smaller tasks or optimize resource-intensive scripts.
How do you ensure reproducibility in containerized Jenkins pipelines?
+
Use Docker images with all required dependencies pre-installed. Example: Build and test Node.js applications using a custom Docker image:
agent { docker { image 'custom-node:14' } }
How do you handle container orchestration in Jenkins pipelines?
+
Use Kubernetes plugins or tools like Helm for deploying and managing containers. Example: Deploy a Helm chart to Kubernetes as part of the pipeline.
How do you manage shared Jenkins resources across multiple teams?
+
Use the Folder and Role-Based Authorization Strategy plugins to isolate team-specific configurations. Example: Each team has a dedicated folder with restricted access to their jobs and agents.
How do you create reusable components for different team pipelines?
+
Use Jenkins Shared Libraries for common functionality like deployment scripts or notifications. Example: Create a shared library function to send Slack notifications:
def sendNotification(String message) {
    slackSend(channel: '#builds', message: message)
}
How do you secure sensitive API keys and tokens in Jenkins?
+
Use the Credentials plugin to securely store and retrieve sensitive information. Example: Use withCredentials to pass an API token to a pipeline:
withCredentials([string(credentialsId: 'api-token', variable: 'TOKEN')]) {
    sh "curl -H 'Authorization: Bearer ${TOKEN}' https://api.example.com"
}
How do you implement secure access control for Jenkins users?
+
Use the Role-Based Authorization Strategy plugin to define roles and permissions. Example: Admins have full access, while developers have job-specific permissions.
How do you handle integration testing in Jenkins pipelines?
+
Spin up test environments using Docker or Kubernetes for isolated testing. Example: Run integration tests against a temporary database container in a pipeline stage.
How do you automate regression testing in Jenkins?
+
Use tools like Selenium or TestNG for regression tests triggered after every build. Example: Schedule nightly builds to run a regression test suite.
How do you customize build notifications in Jenkins?
+
Use plugins like Email Extension or Slack Notification with custom templates. Example: Include build duration and commit messages in Slack notifications.
How do you configure Jenkins to notify specific stakeholders
+
Use the post-build step to send notifications to different recipients based on pipeline results. Example: Notify developers on failure and QA on success.
How do you integrate Jenkins with Terraform for IaC automation
+
Use the Terraform plugin or CLI to apply configurations. Example: Add a stage to validate, plan, and apply Terraform scripts.
How do you integrate Jenkins with Ansible for configuration management
+
Trigger Ansible playbooks from the Jenkins pipeline using the Ansible plugin or CLI. Example: Use ansiblePlaybook to deploy configurations to a server.
How do you horizontally scale Jenkins to handle high workloads
+
Add multiple agents and distribute builds using labels or node affinity. Example: Use Kubernetes agents to dynamically scale based on the build queue.
How do you optimize Jenkins for a distributed build environment
+
Use distributed agents with pre-installed dependencies to reduce setup time. Example: Assign resource-intensive jobs to dedicated high-performance agents.
How do you handle multi-region deployments in Jenkins
+
Use separate stages or pipelines for each region. Example: Deploy to US-East and EU-West regions using AWS CLI commands.
How do you implement zero-downtime deployments in Jenkins
+
Use rolling updates or blue-green deployments to ensure availability. Example: Gradually replace instances in an auto-scaling group with the new version.
How do you debug Jenkins pipeline issues in real-time
+
Use console logs and debug flags in pipeline steps. Example: Add set -x to shell commands for detailed debugging.
How do you handle agent disconnect issues during builds
+
Implement retry logic and configure robust reconnect settings. Example: Auto-restart agents if they disconnect due to resource constraints.
How do you implement pipeline-as-code in Jenkins
+
Store Jenkinsfiles in the source code repository for version-controlled pipelines. Example: Use checkout scm to pull the Jenkinsfile from Git.
How do you integrate Jenkins with GitOps workflows
+
Use tools like ArgoCD or Flux in combination with Jenkins for GitOps. Example: Trigger a deployment when changes are committed to a Git repository.
How do you implement feature toggles in Jenkins pipelines
+
Use environment variables or configuration files to toggle features during deployment. Example: Use a parameter in the pipeline to enable or disable a specific feature: if (params.ENABLE_FEATURE_X) { sh 'deploy-feature-x.sh' }
How do you automate multi-branch testing in Jenkins
+
Use multibranch pipelines to automatically detect and run tests on new branches. Example: Configure branch-specific Jenkinsfiles to define unique testing workflows.
How do you manage dependency trees in Jenkins for large projects
+
Use build tools like Maven or Gradle with dependency management features. Example: Trigger dependent builds using the Parameterized Trigger plugin.
How do you build microservices with interdependencies in Jenkins
+
Use a parent pipeline to trigger builds for dependent microservices in the correct order. Example: Build Service A, then trigger builds for Services B and C, which depend on it.
How do you deploy multiple services using Jenkins in parallel
+
Use the parallel directive in a declarative pipeline. Example: Deploy frontend, backend, and database services simultaneously.
How do you sequence dependent service deployments in Jenkins
+
Use pipeline stages with proper dependencies defined. Example: Deploy a database schema before deploying the backend service.
How do you enforce code scanning in Jenkins pipelines
+
Integrate tools like Snyk, Checkmarx, or OWASP Dependency-Check. Example: Add a stage to scan for vulnerabilities in dependencies and fail the build on high-severity issues.
How do you prevent unauthorized pipeline modifications
+
Use Git repository branch protections and Jenkins access controls. Example: Require pull requests to be reviewed before updating Jenkinsfiles in main.
How do you manage Jenkins jobs for legacy systems
+
Use parameterized freestyle jobs or convert them into pipelines for better flexibility. Example: Migrate a job using shell scripts into a scripted pipeline.
How do you ensure compatibility between Jenkins and legacy build tools
+
Use custom scripts or Dockerized environments that mimic the legacy system. Example: Run builds in a container with legacy dependencies pre-installed.
How do you store and retrieve pipeline artifacts in Jenkins
+
Use the Archive the Artifacts plugin or store artifacts in a dedicated repository like Nexus or Artifactory. Example: Archive build logs and binaries for debugging and auditing.
How do you handle large artifact storage in Jenkins
+
Use external storage solutions like S3 or Azure Blob Storage. Example: Upload artifacts to an S3 bucket as part of the post-build step.
How do you trigger Jenkins builds based on Git tag creation
+
Configure webhooks to trigger jobs when a tag is created. Example: Trigger a release pipeline for tags matching the pattern v*.
How do you implement Git submodule handling in Jenkins
+
Enable submodule support in the Git plugin configuration. Example: Clone and update submodules automatically during the checkout process.
How do you implement cross-browser testing in Jenkins
+
Use tools like Selenium Grid or BrowserStack for browser compatibility testing. Example: Run tests across Chrome, Firefox, and Safari in parallel stages.
How do you manage test environments dynamically in Jenkins
+
Use Docker or Kubernetes to spin up test environments during pipeline execution. Example: Deploy test environments using Helm charts and tear them down after tests.
How do you customize notifications for specific pipeline stages
+
Use conditional logic to send stage-specific notifications. Example: Notify the QA team only when the test stage fails.
How do you integrate Jenkins with Microsoft Teams for notifications
+
Use a webhook to send notifications to Teams channels. Example: Post pipeline results to a Teams channel using a curl command.
How do you optimize Jenkins pipelines for Docker-based applications
+
Use Docker caching and multi-stage builds to speed up builds. Example: Build and push Docker images only if code changes are detected.
How do you deploy containerized applications using Jenkins
+
Use Kubernetes manifests or Docker Compose files in pipeline scripts. Example: Deploy to Kubernetes using kubectl apply.
How do you debug failed Jenkins jobs effectively
+
Analyze logs, enable debug mode, and rerun failing steps locally. Example: Use sh 'set -x' in pipeline steps to trace shell command execution.
How do you handle intermittent pipeline failures
+
Use retry mechanisms and investigate logs to identify flaky components. Example: Retry a step with a maximum of three attempts: retry(3) { sh 'flaky-command.sh' }
How do you implement blue-green deployments in Jenkins pipelines
+
Use separate environments for blue and green, then switch traffic using a load balancer. Example: Deploy the new version to the green environment, test it, and redirect traffic from blue to green.
How do you roll back a blue-green deployment
+
Switch traffic back to the stable environment (e.g., blue) in case of issues. Example: Update load balancer settings to point to the previous version.
How do you standardize pipeline templates for multiple projects
+
Use Jenkins Shared Libraries to define reusable pipeline functions. Example: Define a buildAndDeploy function for consistent CI/CD across projects.
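A minimal sketch of such a library function, assuming a hypothetical library named my-shared-lib and illustrative build.sh/deploy.sh scripts:

// vars/buildAndDeploy.groovy in the shared library repository
def call(String targetEnv) {
    sh './build.sh'                // build step; script name is illustrative
    sh "./deploy.sh ${targetEnv}"  // deploy to the requested environment
}

Usage in a Jenkinsfile:

@Library('my-shared-lib') _
buildAndDeploy('staging')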
How do you parameterize pipeline templates for different use cases
+
Use pipeline parameters to customize behavior dynamically. Example: Use a DEPLOY_ENV parameter to specify the target environment.
How do you monitor long-running builds in Jenkins
+
Use the Build Monitor plugin or integrate with external monitoring tools. Example: Set up alerts for builds exceeding a specific duration.
How do you identify agents with high resource usage
+
Use the Monitoring plugin or analyze system metrics. Example: Identify agents with CPU or memory spikes during builds.
How do you audit Jenkins pipelines for regulatory compliance
+
Use plugins like Audit Trail to log all pipeline changes and executions. Example: Ensure every production deployment is traceable with an audit log.
How do you enforce compliance checks in Jenkins pipelines
+
Integrate with compliance tools like HashiCorp Sentinel or custom scripts. Example: Fail the pipeline if IaC templates do not meet compliance requirements.
How do you configure Jenkins for auto-scaling in cloud environments
+
Use Kubernetes or AWS plugins to dynamically scale agents based on the build queue. Example: Configure a Kubernetes pod template to spin up agents on demand.
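With the Kubernetes plugin, a declarative pipeline can request an ephemeral pod agent per build; a sketch (the container image is illustrative):

pipeline {
    agent {
        kubernetes {
            yaml '''
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: maven
    image: maven:3.9-eclipse-temurin-17
    command: ['sleep']
    args: ['infinity']
'''
        }
    }
    stages {
        stage('Build') {
            steps { container('maven') { sh 'mvn -q package' } }
        }
    }
}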
How do you balance workloads in a distributed Jenkins setup
+
Use node labels and assign jobs based on agent capabilities. Example: Assign resource-intensive builds to high-memory agents.
How do you analyze build success rates in Jenkins
+
Use the Build History Metrics plugin or integrate with external analytics tools. Example: Generate reports showing success and failure trends over time.
How do you track pipeline execution times across multiple jobs
+
Use the Pipeline Stage View plugin to visualize execution times. Example: Identify stages with consistently high execution times.
How do you implement canary deployments in Jenkins pipelines
+
Deploy updates to a small percentage of instances or users first, then gradually increase. Example: Route 5% of traffic to the new version using feature flags or load balancer rules.
How do you deploy serverless applications using Jenkins
+
Use CLI tools like AWS SAM or Azure Functions Core Tools. Example: Deploy a Lambda function using aws lambda update-function-code.
How do you handle a Jenkins master node running out of disk space
+
Clean up old build logs, artifacts, and workspace directories. Example: Use a script to automate periodic cleanup: find $JENKINS_HOME/workspace -type d -mtime +30 -exec rm -rf {} \;
How do you address slow Jenkins startup times
+
Optimize plugins by removing unused ones and upgrading to newer versions. Example: Use the Pipeline Speed/Durability Settings for lightweight pipeline executions.
How do you migrate from Jenkins to a modern CI/CD tool
+
Export pipelines, convert them to the new tool's format, and test the migrated workflows. Example: Migrate from Jenkins to GitHub Actions using YAML-based workflows.
How do you ensure Jenkins pipelines remain future-proof
+
Regularly update plugins, adopt new best practices, and refactor outdated pipelines. Example: Transition from freestyle jobs to declarative pipelines for better maintainability.

Docker

+
Advantages of Kubernetes?
+
It provides automatic scaling, self-healing, load balancing, rolling updates, service discovery, and multi-cloud support. Kubernetes enables highly available and scalable microservice deployments.
Bridge network?
+
Bridge network is the default Docker network for communication between containers on the same host.
Deploy multiple microservices to Docker?
+
Use a separate Dockerfile and image for each microservice, then manage them with Docker Compose or Kubernetes, which handle networking, service discovery, scaling, and container-to-container communication.
Deploy multiple services across multiple host machines?
+
Use Kubernetes, Docker Swarm, or cloud orchestration tools. They handle load balancing, service discovery, networking, and scaling across multiple hosts.
Deploy Spring Boot JAR to Docker?
+
Create a Dockerfile with a JDK base image and copy the JAR. Expose the required port and run using ENTRYPOINT ["java","-jar","app.jar"]. Build and run using Docker commands.
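A minimal Dockerfile sketch for this (the base image tag and JAR path are illustrative):

FROM eclipse-temurin:17-jre
WORKDIR /app
# copy the built Spring Boot JAR into the image
COPY target/app.jar app.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "app.jar"]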
Deploy Spring Boot Microservice to Docker?
+
Package the microservice as a JAR and create a Dockerfile using a JDK base image. Copy the JAR file and expose the service port. Build the Docker image and run the container using docker run -p <host-port>:<container-port> <image>.
Deploy Spring Boot WAR to Docker?
+
Create a Dockerfile using a Tomcat base image and copy the WAR file into the webapps folder. Build the Docker image using docker build -t app . and run the container using docker run -p 8080:8080 app. This deploys the WAR inside a Dockerized Tomcat environment.
DifBet ADD and COPY in Dockerfile?
+
COPY copies local files; ADD can also copy from remote URLs and extract tar archives.
DifBet CMD and ENTRYPOINT in Dockerfile?
+
CMD sets default arguments for a container; ENTRYPOINT configures the container to run as an executable.
DifBet Docker and virtual machines?
+
Docker containers share the host OS kernel and are lightweight; VMs have their own OS and are heavier.
DifBet Docker bind mount and volume?
+
Bind mount maps host directories to containers; volumes are managed by Docker for persistence and portability.
DifBet Docker Compose and Docker Swarm?
+
Docker Compose manages multi-container applications locally; Docker Swarm is a container orchestration tool for clustering and scaling containers.
DifBet Docker image and container?
+
An image is a blueprint; a container is a running instance of that image.
DifBet Docker image layer and container layer?
+
Image layers are read-only; container layer is read-write on top of image layers.
DifBet Docker run and Docker service create?
+
Docker run creates a standalone container; service create deploys containers as a Swarm service with scaling.
DifBet public and private Docker registries?
+
Public registry is accessible to everyone; private registry restricts access to specific users or organizations.
DiffBet Kubernetes and Docker Swarm?
+
Docker Swarm is simpler and tightly integrates with Docker, while Kubernetes is more powerful with advanced scheduling, auto-scaling, and monitoring capabilities. Kubernetes is enterprise-grade, Swarm suits smaller deployments.
Docker attach vs exec?
+
Attach connects to container stdin/stdout; exec runs a command in a running container.
Docker attach?
+
Docker attach connects to a running container’s standard input, output, and error streams.
Docker best practices?
+
Best practices include small images, multi-stage builds, volume usage, environment variables, and secure secrets management.
Docker build ARG?
+
ARG defines a variable that can be passed during build time.
Docker build cache?
+
Build cache stores image layers to speed up subsequent builds.
Docker build?
+
Docker build creates an image from a Dockerfile.
Docker cache?
+
Docker cache stores previously built layers to speed up future builds.
Docker CLI?
+
Docker CLI is a command-line interface to manage Docker images, containers, networks, and volumes.
Docker commit?
+
Docker commit creates a new image from a container’s current state.
Docker compose down?
+
Docker compose down stops and removes the containers and networks defined in a Compose file (and volumes when the -v flag is used).
Docker compose logs?
+
Docker compose logs shows logs from all services in the Compose application.
Docker compose scale?
+
Compose scale adjusts the number of container instances for a service.
Docker compose up?
+
Docker compose up builds, creates, and starts containers defined in a Compose file.
Docker Compose?
+
Docker Compose is a tool for defining and running multi-container Docker applications using a YAML file (docker-compose.yml). It automates container creation, networking, and scaling using simple commands like docker compose up.
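A small docker-compose.yml sketch (service names, ports, and the Postgres password are illustrative):

version: "3.8"
services:
  web:
    build: .            # build the app image from the local Dockerfile
    ports:
      - "8080:8080"
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example
    volumes:
      - db-data:/var/lib/postgresql/data   # persist database files
volumes:
  db-data: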
Docker config?
+
Docker config stores non-sensitive configuration data for containers in Swarm mode.
Docker container commit?
+
Container commit creates a new image from a running container.
Docker container restart?
+
Container restart stops and starts a container.
Docker container?
+
A Docker container is a lightweight standalone executable package that includes application code and all dependencies.
Docker context use?
+
Context use switches the active Docker environment or endpoint.
Docker context?
+
Docker context allows switching between multiple Docker environments or endpoints.
Docker diff?
+
Diff shows changes made to container filesystem since creation.
Docker Engine?
+
Docker Engine is the core component of Docker that creates and runs Docker containers.
Docker ENTRYPOINT vs CMD combination?
+
ENTRYPOINT defines executable; CMD provides default arguments to ENTRYPOINT.
Docker ENV?
+
ENV sets environment variables inside a container at build or run time.
Docker exec?
+
Docker exec runs a command inside a running container.
Docker EXPOSE?
+
EXPOSE documents the port on which the container listens.
Docker health check?
+
Health check monitors container status and defines conditions for healthy or unhealthy states.
Docker healthcheck command?
+
Healthcheck defines a command in Dockerfile to monitor container status.
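Example Dockerfile instruction (the endpoint is illustrative, and curl must exist in the image):

HEALTHCHECK --interval=30s --timeout=5s --retries=3 \
  CMD curl -f http://localhost:8080/health || exit 1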
Docker Hub?
+
Docker Hub is a cloud-based registry to store and share Docker images.
Docker image prune?
+
Image prune removes dangling (unused) images.
Docker image?
+
A Docker image is a read-only template used to create Docker containers containing the application and its dependencies.
Docker inspect format?
+
Inspect format uses Go templates to extract specific JSON fields.
Docker inspect?
+
Docker inspect returns detailed JSON information about containers images or networks.
Docker kill vs stop?
+
Kill forces container termination; stop gracefully stops and allows cleanup.
Docker layer?
+
Docker layer is a filesystem layer created for each Dockerfile instruction during image build.
Docker load vs import?
+
Load imports an image from a tar file; import creates an image from a filesystem archive.
Docker login?
+
Docker login authenticates a user with a Docker registry.
Docker logout?
+
Docker logout removes saved credentials for a Docker registry.
Docker logs -f?
+
Logs -f streams container logs in real-time.
Docker logs?
+
Docker logs displays the standard output and error of a running or stopped container.
Docker multi-stage build?
+
Multi-stage build reduces image size by using multiple FROM statements in a Dockerfile for building and final image creation.
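A multi-stage Dockerfile sketch for a Java build (image tags and paths are illustrative):

# Stage 1: build with the full JDK and Maven
FROM maven:3.9-eclipse-temurin-17 AS build
WORKDIR /src
COPY . .
RUN mvn -q package

# Stage 2: final image contains only the runtime and the artifact
FROM eclipse-temurin:17-jre
COPY --from=build /src/target/app.jar /app/app.jar
ENTRYPOINT ["java", "-jar", "/app/app.jar"]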
Docker network create?
+
Docker network create creates a new Docker network.
Docker network inspect?
+
Docker network inspect shows detailed information about a network and connected containers.
Docker network ls?
+
Docker network ls lists all networks on the Docker host.
Docker network types?
+
Types include bridge, host, overlay, macvlan, and none.
Docker network?
+
Docker network allows containers to communicate with each other or with external networks.
Docker node?
+
Docker node is a Swarm cluster member (manager or worker) managed by Docker.
Docker overlay network in Swarm?
+
Overlay network allows services across multiple nodes to communicate securely.
Docker ports vs EXPOSE?
+
EXPOSE only documents; ports (-p) maps container ports to host.
Docker prune -a?
+
Docker prune -a removes all stopped containers, unused networks, all unused images, and optionally volumes.
Docker prune containers?
+
Prune containers removes stopped containers to free space.
Docker prune volume?
+
Docker prune volume removes unused volumes.
Docker prune?
+
Docker prune removes unused containers, networks, volumes, or images.
Docker ps -a?
+
Docker ps -a lists all containers including stopped ones.
Docker ps?
+
Docker ps lists running containers and their details.
Docker pull?
+
Docker pull downloads a Docker image from a registry.
Docker push?
+
Docker push uploads a Docker image to a registry.
Docker registry?
+
Docker registry stores Docker images; Docker Hub is a public registry while private registries are also supported.
Docker replica?
+
Replica is an instance of a service running in a Swarm cluster.
Docker restart always?
+
Restart always ensures the container restarts automatically if it stops.
Docker restart policy?
+
Restart policy defines when a container should restart, e.g., always, unless-stopped, on-failure.
Docker rm?
+
Docker rm removes a stopped container.
Docker rmi?
+
Docker rmi removes a Docker image from the local system.
Docker save vs export?
+
Save exports an image as a tar file; export exports a container filesystem.
Docker secret vs config?
+
Secret stores sensitive data; config stores non-sensitive configuration data.
Docker secrets create?
+
Docker secrets create adds a secret to the Swarm cluster.
Docker secrets inspect?
+
Docker secrets inspect shows details of a specific secret.
Docker secrets ls?
+
Docker secrets ls lists all secrets in the Swarm cluster.
Docker secrets?
+
Docker secrets securely store sensitive data like passwords or API keys for use in containers.
Docker security?
+
Docker security includes using least privilege, scanning images, securing secrets, and isolating containers.
Docker service update?
+
Docker service update updates a running service in a Swarm cluster.
Docker service?
+
Docker service runs a container or group of containers across a Swarm cluster with scaling and update capabilities.
Docker Stack?
+
Docker Stack deploys and manages a group of services defined in a Compose file on a Swarm cluster. It supports scaling, rolling updates, and distributed deployment across nodes.
Docker stats?
+
Docker stats shows real-time resource usage (CPU, memory, network) for containers.
Docker stop and Docker kill?
+
Docker stop gracefully stops a container; Docker kill forces termination.
Docker swarm init?
+
Docker swarm init initializes a Docker host as a Swarm manager.
Docker swarm join?
+
Docker swarm join adds a node to a Swarm cluster.
Docker Swarm?
+
Docker Swarm is a native clustering and orchestration tool for Docker allowing management of multiple Docker hosts.
Docker system df?
+
Docker system df shows disk usage of images, containers, volumes, and build cache.
Docker tag?
+
Docker tag assigns a new name or version to an image.
Docker top vs exec?
+
Top shows running processes; exec runs a new command in container.
Docker top?
+
Docker top shows running processes inside a container.
Docker USER?
+
USER sets the username or UID to run the container process.
Docker volume create?
+
Docker volume create creates a new persistent volume for containers.
Docker volume ls?
+
Docker volume ls lists all Docker volumes on the host.
Docker volume?
+
A Docker volume is a persistent storage mechanism to store data outside the container filesystem.
Docker WORKDIR?
+
WORKDIR sets the working directory for container commands.
Docker?
+
Docker is a containerization platform that lets developers build, ship, and run applications in lightweight, portable containers packaging code and dependencies. It ensures consistent environments across development, testing, and production and improves deployment speed, scalability, and resource utilization.
Dockerfile used for?
+
A Dockerfile contains a set of instructions to build a Docker image automatically. It defines the base image, application code, dependencies, environment variables, and commands to run the app inside a container.
Dockerfile?
+
A Dockerfile is a text file containing instructions to build a Docker image.
Kubernetes Namespaces?
+
Namespaces logically isolate clusters into multiple virtual environments. They help manage resources, security policies, and team separation in large applications.
Kubernetes?
+
Kubernetes is an open-source container orchestration system for automating deployment, scaling, and management of containerized applications across clusters.
Node in Kubernetes?
+
A node is a physical or virtual machine in the Kubernetes cluster that runs application workloads. It contains kubelet, container runtime, and networking components.
Overlay network?
+
Overlay network connects containers across multiple Docker hosts in a Swarm cluster.
Pod in Kubernetes?
+
A pod is the smallest deployable unit containing one or more containers sharing storage, networking, and lifecycle. Kubernetes schedules and manages pods rather than individual containers.
Rolling update in Docker?
+
Rolling update updates service replicas gradually to avoid downtime.
Scenarios where Java developers use Docker?
+
Docker is used for creating consistent dev environments, Microservices deployment, CI/CD pipelines, testing distributed systems, isolating services, and running different Java versions without conflicts.
What is Docker?
+

· Docker is an open-source platform that allows you to build, ship, and run applications inside containers.

· A container is a lightweight, standalone, and portable environment that includes everything your application needs to run, such as code, runtime, libraries, and dependencies.

· With Docker, developers can ensure their applications run the same way everywhere, whether on a laptop, a testing server, or in the cloud.

· It solves the "it works on my machine" problem, because containers carry all dependencies with them.

In short:

· Docker = platform to create and manage containers.

· Container = small, portable environment to run applications with all dependencies.

Restart Policy
+

In Docker, when we talk about policy, it usually refers to the restart policies of containers.

These policies define what should happen to a container when it stops, crashes, or when Docker itself restarts.

Types of restart policy:

1. no (default)

2. always

3. on-failure

4. unless-stopped

Always policy:
+

With the always policy, the container restarts whenever it stops. Even a container you stopped manually will start again when the Docker daemon restarts.

Command:

docker container run -d --restart always httpd

Unless-stopped policy:
+

With unless-stopped, the container restarts automatically if it exits due to an error or when the Docker daemon restarts, unless you stopped it manually. A manually stopped container stays stopped even after a daemon restart.

Command:

docker container run -d --restart unless-stopped httpd

On-failure policy:
+

When a container exits with an error (non-zero exit code) and has the on-failure policy, Docker restarts it automatically.

Command:

docker container run -d --restart on-failure httpd

Max Retry in the on-failure Policy
+

When you use the on-failure restart policy in Docker, you can set a maximum retry count.

· This tells Docker how many times it should try to restart a failed container before giving up.

· If the container keeps failing and reaches the retry limit, Docker will stop trying.

docker run -d --restart=on-failure:5 myapp

Port Mapping
+

· Every Docker container has its own network namespace (like a mini-computer).

· By default, services inside a container are not accessible from outside the host machine.

· Port mapping is the process of exposing a container’s internal port on a host machine’s port so that external users can access it.

It uses the -p or --publish option:

docker container run -d -p <host-port>:<container-port> httpd

Networking
+

Docker networking is how containers communicate with each other, with the host machine, and with the outside world (internet).

When Docker is installed, it creates some default networks. Containers can be attached to these networks depending on how you want them to communicate.

Default Docker Networks

1. bridge (default)

a. If you run a container without specifying a network, it connects to the bridge network.

b. Containers on the same bridge network can communicate using IP addresses.

c. You can also create your own user-defined bridge for name-based communication.

2. host

a. Removes the isolation between the container and the host network.

b. Container uses the host’s network directly.

c. Example: If container exposes port 80, it will be directly available on host port 80.

3. none

a. Completely isolates the container from all networks.

b. No internet, no container-to-container communication.

How containers communicate:

· Container ↔ Container (same bridge network) → via container name or IP.

· Container ↔ Host → via port mapping (-p hostPort:containerPort).

· Container ↔ Internet → via NAT (Network Address Translation) on the host.

List networks:

docker network ls

Create a network:

docker network create --driver bridge <network-name>

docker network create --driver bridge --subnet 192.168.0.0/16 mynetwork

Create a container in your custom network:

docker container run -d --network mynetwork httpd

Inspect a container to see its network details:

docker container inspect <container-name>

Volume
+

· By default, anything you save inside a container is temporary.

· If the container is deleted, all data inside it is lost.

· Volumes are Docker’s way to store data permanently (persistent storage).

A Docker volume is a storage location outside the container’s filesystem but managed by Docker. This way, data remains safe even if the container is removed or recreated.

Why use volumes?

1. Data persistence → data won’t be lost if the container is deleted.

2. Sharing data → multiple containers can share the same volume.

3. Performance → better than bind mounts for production workloads.

Types of volume:

1. Bind mount

2. Volume mount (local volume)

Bind mount:

· A bind mount directly connects a host machine’s directory/file to a container’s directory.

· Whatever changes you make inside the container will reflect on the host, and vice versa.

· It is different from a volume: volumes are managed by Docker (stored under /var/lib/docker/volumes/...), while bind mounts are managed by you (stored anywhere on your host).

Command:

docker container run -d -p 80:80 -v /directory_name:/usr/local/apache2/htdocs httpd

Volume mount: create a volume
+

Command:

docker volume create my-vol

Docker Image
+

· A Docker image is a blueprint (template) used to create Docker containers.

· It contains: application code, dependencies (libraries, packages), configuration files, and environment settings.

· You can think of an image like a snapshot or read-only template.

· When you run an image, it becomes a container.

docker pull nginx

Types of image creation:

1. Commit method

2. Dockerfile method

Commit method:

· The docker commit command is used to create a new image from an existing container.

· This is helpful when you run a container, make changes inside it (install packages, edit files, configure apps), and then save those changes as a new Docker image.

Log in to Docker Hub before pushing the image:

docker login -u <dockerhub-username>

Create an image with the commit method (commands):
+

vim index.html   (content: "this is my commit method")

· docker container run -d --name web httpd

· docker container cp index.html web:/usr/local/apache2/htdocs

· docker container commit -a "grras" web team:latest   (team = image name)

Create a new container from the custom image, hit its IP in the browser, and check the content.

Push the image to Docker Hub:

docker image tag team:latest username/team

docker image push username/team:latest

Dockerfile
+

· A Dockerfile is a text file that contains a set of instructions to build a Docker image.

· Instead of making changes in a container and committing them (using docker commit), we write instructions in a Dockerfile so the image can be built automatically and repeatedly.

· It ensures consistency (the same image every time you build).

Common instructions in a Dockerfile:

· FROM → base image (e.g., ubuntu, alpine, nginx)

· RUN → run commands (install packages)

· COPY → copy files from host to image

· WORKDIR → set working directory

· CMD → default command to run when the container starts

· EXPOSE → document which port the container will use

mkdir docker

cd docker

vim index.html

Write a Dockerfile for the image (a sample is sketched below), then build and run:

docker image build -t web:test .

docker container run -d web:test
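A minimal sample Dockerfile for the steps above, assuming the Apache httpd image serves the page:

# Dockerfile
FROM httpd:2.4
# copy the page into Apache's document root
COPY index.html /usr/local/apache2/htdocs/
EXPOSE 80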


.NET Core

+
.NET (5/6/7/8+)?
+
A unified, cross-platform, high-performance framework for building desktop, web, mobile, cloud, and IoT apps.
.NET Core?
+
A fast, modular, cross-platform, open-source framework for building modern cloud and web apps.
.NET Framework?
+
A Windows-only framework with CLR and rich libraries for building desktop and legacy ASP.NET apps.
.NET Platform Standards?
+
Specifications that ensure shared APIs and cross-platform compatibility across .NET runtimes.
.NET?
+
A software framework with libraries, runtime, and tools for building applications.
@Html.AntiForgeryToken()?
+
Token used to prevent CSRF attacks.
3 important segments for routing?
+
Controller name, Action name, and optional Parameter (id).
3-tier vs MVC?
+
3-tier focuses on application architecture; MVC focuses on UI interaction and request handling.
ABAC?
+
Attribute-Based Access Control.
Abstract Class vs Interface?
+
Abstract class can have implementation; interface cannot.
Abstraction?
+
Hiding complex implementation details.
Access Control Matrix?
+
Table mapping users/roles to permissions.
Access Review?
+
Periodic review of user permissions.
Access Token Audience?
+
Specifies which API the token is intended for.
Access Token Leakage?
+
Unauthorized party obtains a token.
Access Token?
+
Token used to access protected APIs.
Accessing HttpContext
+
Controllers and middleware can access HttpContext directly; services get it through dependency injection using IHttpContextAccessor.HttpContext.
ACL?
+
Access Control List defining user permissions for a resource.
Action Filter?
+
Code executed before or after controller action execution.
Action Filters?
+
Attributes executed before/after controller actions.
Action Method?
+
A public method inside controller handling client requests.
Action Selector?
+
Attributes like [HttpGet], [HttpPost], [Route].
ActionInvoker?
+
Executes selected MVC action method.
ActionName attribute?
+
Maps method to a different public action name.
ActionResult vs ViewResult?
+
ActionResult is a base type that can return various results; ViewResult specifically returns a View response.
ActionResult?
+
Base type for all responses returned from action methods. A return type in MVC representing HTTP responses returned from controller actions.
AD Group?
+
A collection of users with shared permissions.
ADO.NET?
+
Data access framework for relational databases.
AdRotator Control:
+
Displays banner ads from an XML file randomly or by weight, supporting URL redirection for dynamic ad management.
Advantages of ASP.NET?
+
High-performance, secure server-side framework supporting WebForms, MVC, Web API, caching, authentication, and rapid development.
Advantages of MVC:
+
Provides testability, clean separation, faster development, reusable code, and SEO-friendly URLs.
Ajax in ASP.NET?
+
Enables asynchronous browser-server communication to update page parts without full reload, using controls like UpdatePanel and ScriptManager.
AJAX in MVC?
+
Asynchronous calls to server without full page reload.
AllowAnonymous?
+
Attribute used to skip authorization.
ANCM?
+
ASP.NET Core Module enables hosting .NET Core under IIS reverse proxy.
Anti-forgery middleware?
+
Middleware enforcing CSRF protection in .NET Core.
AntiForgeryToken validation attribute?
+
[ValidateAntiForgeryToken] ensures request includes valid token.
AntiXSS?
+
Technique for preventing cross-site scripting.
AOT Compilation?
+
Compiles .NET apps to native code for faster startup and lower memory use.
API Documentation?
+
Swagger/OpenAPI.
API Gateway?
+
Single entry point for routing, auth, rate limiting.
API Key Authentication?
+
Custom header with an API key.
API Key Authorization?
+
Simple authorization using an API key header.
API Versioning Methods?
+
URL, Header, Query, Media Type.
API Versioning?
+
Supporting multiple versions of an API (via routes, headers, or query params) to maintain backward compatibility.
ApiController attribute do?
+
Enables auto-validation and improved routing.
App Domain Concept in ASP.NET?
+
AppDomain isolates applications within a web server. It provides security, reliability, and memory isolation. Each website runs in its own AppDomain. If one crashes, others remain unaffected.
app.Run vs app.Use?
+
app.Use() continues the pipeline; app.Run() terminates it.
app.UseDeveloperExceptionPage()?
+
Displays detailed errors in development mode.
app.UseExceptionHandler()?
+
Middleware for centralized exception handling.
AppDomain?
+
Isolated region where a .NET application runs.
Application Insights?
+
Azure monitoring platform for performance and telemetry.
Application Model
+
The application model determines how controllers, actions, and routing behave. It helps apply conventions and filters across the application.
Application Pool in IIS?
+
Worker process isolation unit.
appsettings.json used for?
+
Stores configuration values like connection strings, logging, and custom settings.
appsettings.json?
+
Primary configuration file in ASP.NET Core; stores key/value settings for the application.
Area in MVC?
+
Module-level grouping for large applications (Admin, Customer, User).
ASP.NET Core host apps without IIS?
+
Yes, it can run standalone using Kestrel.
ASP.NET Core run in Docker?
+
Yes, it supports containerization with official runtime and SDK images.
ASP.NET Core serve static files?
+
By enabling app.UseStaticFiles() and placing files in wwwroot.
ASP.NET Core?
+
A cross-platform, high-performance web framework for building modern cloud-based applications, APIs, MVC apps, and real-time apps.
ASP.NET filters run at the end?
+
Exception Filters are executed last. They handle unhandled errors during action or result processing. Used for logging and custom error pages. Ensures graceful error handling.
ASP.NET Identity?
+
Framework for user management, roles, claims.
ASP.NET MVC?
+
Model–View–Controller pattern for web applications.
ASP.NET page life cycle?
+
ASP.NET page life cycle defines stages a page goes through when processing. Key stages: Page Request, Initialization, View State Load, Postback Event Handling, Rendering, and Unload. Events allow custom logic execution at each phase. It controls how data is processed and displayed.
ASP.NET Web Forms?
+
Event-driven web framework using drag-and-drop UI.
ASP.NET?
+
Microsoft’s server-side framework for building dynamic, high-performance websites, APIs, and enterprise web apps with MVC, Web API, and WebForms.
Assemblies?
+
Compiled .NET code units containing code, metadata, and manifests (DLL or EXE) used for deployment.
Assembly defining MVC:
+
MVC components are defined in System.Web.Mvc.dll.
Assign an alias name for ASP.NET Web API Action?
+
You can use the [ActionName] attribute to give an alias to an action. Example: [ActionName("GetStudentInfo")]. This helps when method names and route names need to differ. It's useful for versioning and friendly URLs.
async action method?
+
Action using async/await for non-blocking operations.
Async operations in EF Core?
+
Perform database tasks asynchronously to improve responsiveness and scalability. Use ToListAsync(), FirstAsync(), etc.
Async programming?
+
Non-blocking programming using async/await.
async/await?
+
Asynchronous programming model avoiding blocking operations.
async/await?
+
Keywords enabling non-blocking asynchronous code execution.
Attribute Routing
+
Defines routes directly on controllers and actions using attributes like [Route("api/[controller]")].
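A small sketch of attribute routing on an API controller (controller and route names are illustrative):

[ApiController]
[Route("api/[controller]")]
public class ProductsController : ControllerBase
{
    // Handles GET api/products/5
    [HttpGet("{id}")]
    public IActionResult GetById(int id) => Ok(new { id });
}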
Attribute-based routing?
+
Routing using attributes above controller/action.
Attributes?
+
Metadata annotations used for declaring properties about code.
authentication and authorization in ASP.NET?
+
Authentication verifies user identity (who they are). Authorization defines access permissions for authenticated users. ASP.NET supports built-in security mechanisms. Both ensure secure application access.
Authentication in ASP.NET Core?
+
Process of verifying user identity.
Authentication modes in ASP.NET for security?
+
ASP.NET supports Windows, Forms, Passport, and Anonymous authentication. Forms authentication is common for web apps. Security is configured in Web.config. Each mode provides a method to validate users.
Authentication vs Authorization?
+
Authentication verifies identity; authorization verifies access rights.
authentication?
+
Process of verifying user identity.
Authorization Audit Trail?
+
Logs that track authorization decisions.
Authorization Cache?
+
Caching authorization decisions for performance.
Authorization Drift?
+
Outdated or incorrectly configured permissions.
Authorization Filter?
+
Executes before controller actions to enforce permissions.
Authorization Handler?
+
Custom logic to evaluate authorization requirements.
Authorization Pipeline?
+
Sequence of steps evaluating user access.
Authorization Policy?
+
Named group of requirements.
Authorization Requirement?
+
Represents a condition to fulfill authorization.
Authorization Server?
+
Server that issues access tokens.
Authorization types?
+
Role-based, Claim-based, Policy-based, Resource-based.
Authorization?
+
Authorization determines what a user is allowed to access after authentication.
authorization?
+
Process of verifying user access rights based on roles or claims.
Authorize attribute?
+
Enforces authorization using roles, policies, or claims.
AutoMapper?
+
Object mapping library.
AutoMapper?
+
Library for mapping objects automatically.
Azure App Service?
+
Cloud hosting platform for ASP.NET Core applications.
Azure Key Vault?
+
Secure storage for secrets, keys, and credentials.
B2B Authorization?
+
Authorization in multi-tenant business apps.
B2C Authorization?
+
Authorization in consumer-facing apps.
Backchannel Communication?
+
Secure server-server communication for token exchange.
Background worker coding?
+
Inherit from BackgroundService.
BackgroundService class?
+
Runs long-lived background tasks in .NET apps, e.g., for messaging or monitoring.
Basic Authentication?
+
Authentication using Base64 encoded username and password.
Basic Authorization?
+
Credentials sent as Base64 encoded username:password.
Bearer Authentication?
+
Token-based authentication mechanism where tokens are sent in request headers.
Bearer Token?
+
Authorization token sent in Authorization header.
beforeFilter(), beforeRender(), afterFilter():
+
beforeFilter() runs before action, beforeRender() runs before view rendering, and afterFilter() runs after the response.
Benefits of ASP.NET Core?
+
Cross-platform, Cloud-ready, container friendly, modular, and fast runtime.
Benefits of using MVC:
+
MVC gives separation of concerns, supports testability, clean URLs, maintainability, and scalability.
Blazor Server and WebAssembly?
+
Server-side rendering vs client-side execution in browser.
Blazor?
+
Framework for building interactive web UIs using C# instead of JavaScript.
Boxing?
+
Converting a value type to an object type.
Build in .NET?
+
Compilation of code into IL.
Bundling and Minification?
+
Optimizing CSS and JS by combining and shrinking files, improving performance by reducing file sizes and the number of requests.
Cache Tag Helper
+
This helper caches rendered HTML output on the server, improving performance for static or rarely changing UI sections.
Caching / Response Caching
+
Caching stores output to improve performance and reduce processing. Response caching stores HTTP responses, improving load time for repeated requests.
Caching in ASP.NET Core?
+
Improves performance by storing frequently accessed data.
Caching in ASP.NET?
+
Caching stores frequently accessed data to improve performance using Output, Data, or Object Caching. It reduces server load, speeds up responses, and is ideal for static or rarely changing data.
caching?
+
Storing frequently accessed data in memory for faster response.
Can you create an app using both WebForms and MVC?
+
Yes, it is possible to host both in the same project. MVC can coexist with WebForms when routing is configured properly. This allows gradual migration. Both frameworks share the same runtime environment.
Cases where routing is not needed:
+
Routing is unnecessary for requests for static files like images/CSS or for direct WebForms/WebService calls.
Change Token
+
A Change Token is a notification mechanism used to monitor changes, such as configuration files or file-based caching. When a change occurs, the token triggers refresh or rebuild actions.
CI/CD?
+
Continuous Integration and Continuous Deployment: an automation pipeline for building, testing, and deploying applications.
CIL/IL?
+
Intermediate code that the CLR JIT-compiles into machine code, enabling language-independence and runtime optimization.
Circuit Breaker?
+
Polly-based approach to handle failing services.
Claim?
+
A user attribute such as name, email, role, or permission.
Claim-Based Authorization?
+
Authorization based on user claims such as email, age, department.
Claims?
+
User-specific attributes like name, id, role.
Claims-based authorization?
+
Authorization using claims stored in user identity.
class is used to return JSON in MVC?
+
JsonResult class is used to return JSON formatted data.
Class library?
+
A project that compiles to reusable DLL.
Client-side validation?
+
Validation executed in browser using JavaScript.
CLR?
+
Common Language Runtime: executes .NET applications and manages memory, garbage collection, security, and exceptions.
CLS?
+
Common Language Specification: defines the language rules all .NET languages must follow.
Coarse-Grained Authorization?
+
Role-level access control.
Code behind an Inline Code?
+
Code-behind keeps design and logic separate using external .cs files. Inline code is written directly inside .aspx pages. Code-behind improves maintainability and reusability. Inline code is simpler but less structured.
Code First Migration?
+
Approach where database schema is created from C# models.
Column-Level Security?
+
Restricts access to specific columns.
command builds project?
+
dotnet build
command is used to scaffold projects?
+
dotnet new
command restores packages?
+
dotnet restore
command runs app?
+
dotnet run
Concepts of Globalization and Localization in .NET?
+
Globalization prepares an app to support multiple languages and cultures. Localization customizes the app for a specific culture using resource files. ASP.NET uses .resx files for language translation. These features help create multilingual web applications.
Conditional Access?
+
Authorization based on conditions like location or device.
Configuration / appsettings.json
+
Settings are stored in appsettings.json and accessed using IConfiguration.
Configuration System in .NET Core?
+
Instead of Web.config, .NET Core uses appsettings.json, environment variables, user secrets, and Azure KeyVault. It supports hierarchical and strongly typed configuration.
ConfigurationBuilder?
+
ConfigurationBuilder loads settings from multiple sources like JSON, XML, Azure, or environment variables. It provides flexible app configuration.
Connection Pooling?
+
Reuse of open database connections for performance.
Consent Screen?
+
User approval of requested permissions.
Containerization in ASP.NET Core?
+
Running application inside lightweight containers instead of full VMs.
Content Negotiation?
+
Mechanism to return JSON/XML based on Accept headers.
Content Negotiation?
+
Determines response format (JSON/XML) based on client request headers.
Controller in MVC?
+
Controller handles incoming requests, processes data, and returns responses.
Controller?
+
A controller handles incoming HTTP requests and returns responses such as JSON, views, or status codes. It follows MVC (Model-View-Controller) pattern.
ControllerBase?
+
Base class for API controllers (no views).
Convention-based routing?
+
Routing following default predefined conventions.
Cookie vs Token Auth?
+
Cookie is server-based; token is stateless.
Cookie-less Session:
+
When cookies are disabled, session data is tracked using URL rewriting. Session ID appears in the URL. Helps maintain session without browser cookies.
Cookies in ASP.NET?
+
Cookies store user data in the browser, such as username or session ID, for future requests. ASP.NET supports persistent and non-persistent cookies to enhance personalization and authentication.
CORS?
+
CORS (Cross-Origin Resource Sharing) is a security feature controlling which external domains may access server resources. ASP.NET Core allows configuring the allowed origins, methods, and headers.
Create .NET Core API project?
+
Use: dotnet new webapi -n MyApi
Cross-page posting in ASP.NET:
+
Cross-page posting allows a form to post data to another page using PostBackUrl property. The target page can access source page controls using PreviousPage property. Useful for multi-step forms.
Cross-Platform Compilation?
+
.NET Core/.NET can compile and run on Windows, Linux, or macOS. Developers can build apps once and run them anywhere.
CRUD API coding question?
+
Implement GET, POST, PUT, DELETE endpoints.
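A minimal in-memory sketch of such endpoints (the Item record and static list are illustrative, not a production store):

public record Item(int Id, string Name);

[ApiController]
[Route("api/[controller]")]
public class ItemsController : ControllerBase
{
    private static readonly List<Item> _items = new();

    [HttpGet]
    public IActionResult GetAll() => Ok(_items);               // GET api/items

    [HttpPost]
    public IActionResult Create(Item item)                     // POST api/items
    {
        _items.Add(item);
        return CreatedAtAction(nameof(GetAll), item);
    }

    [HttpPut("{id}")]
    public IActionResult Update(int id, Item item)             // PUT api/items/1
    {
        var index = _items.FindIndex(i => i.Id == id);
        if (index < 0) return NotFound();
        _items[index] = item;
        return NoContent();
    }

    [HttpDelete("{id}")]
    public IActionResult Delete(int id)                        // DELETE api/items/1
        => _items.RemoveAll(i => i.Id == id) > 0 ? NoContent() : NotFound();
}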
CSRF Protection
+
CSRF attacks force users to perform unintended actions. ASP.NET Core mitigates it using anti-forgery tokens and validation attributes.
CSRF?
+
Cross-Site Request Forgery: an attack that forces authenticated users to execute unwanted actions on behalf of an attacker.
CTS?
+
Common Type System: defines how types are declared and used, ensuring consistency of data types across all .NET languages.
Custom Action Filter coding?
+
Extend ActionFilterAttribute.
Custom Exception?
+
User-defined exception class.
Custom Middleware in ASP.NET Core
+
Custom middleware is created by writing a class with an Invoke or InvokeAsync method that accepts HttpContext. It is registered in the pipeline using app.Use(). Middleware can modify requests, responses, or pass control to the next component.
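A minimal sketch of a custom middleware class (the logging behavior is illustrative):

// Middleware that logs each request path before passing control on
public class RequestLoggingMiddleware
{
    private readonly RequestDelegate _next;

    public RequestLoggingMiddleware(RequestDelegate next) => _next = next;

    public async Task InvokeAsync(HttpContext context)
    {
        Console.WriteLine($"Request: {context.Request.Path}");
        await _next(context); // invoke the next middleware in the pipeline
    }
}

// Registration in Program.cs
app.UseMiddleware<RequestLoggingMiddleware>();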
Custom Model Binding
+
Implement IModelBinder and register it using ModelBinderProvider.
Data Annotations?
+
Attribute-based validation using attributes like [Required], [EmailAddress], [StringLength].
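A model sketch using these attributes (property names are illustrative):

using System.ComponentModel.DataAnnotations;

public class UserModel
{
    [Required]
    [StringLength(50)]
    public string Name { get; set; }

    [EmailAddress]
    public string Email { get; set; }
}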
Data Binding?
+
Connecting UI elements with data sources.
Data Cache:
+
Data Cache stores frequently used data to improve performance. It supports expiration policies and dependency-based invalidation. Accessed through HttpRuntime.Cache.
Data controls available in ASP.NET?
+
ASP.NET provides several data-bound controls like GridView, ListView, Repeater, DataList, and FormView. These controls display and manipulate database records. They support sorting, paging, and editing features. They simplify data presentation.
Data Masking?
+
Hiding sensitive data based on policies.
Data Protection API?
+
Encrypting sensitive data.
Data Seeding?
+
Preloading default or sample data into database.
DbContext?
+
Class managing database connection and entity tracking.
DbSet?
+
Represents a database table.
Default project structure?
+
Minimal hosting model with Program.cs and optional folders for Models, Controllers, Services.
Default route format?
+
{controller}/{action}/{id}
Define Default Route:
+
The default route is {controller}/{action}/{id} with default values like Home/Index. It helps map incoming requests automatically.
Define DTO.
+
Data Transfer Object—used to expose safe API models.
Define Filters in MVC.
+
Filters allow custom logic before or after controller actions, such as authentication, logging, or error handling.
Define Output Caching in MVC.
+
Output caching stores the rendered output of an action to improve performance and reduce server processing.
Define Scaffolding in MVC:
+
Scaffolding automatically generates CRUD code and views based on the model. It speeds up development by providing a code structure quickly.
Define the 3 logical layers of MVC?
+
Presentation layer → View; Business logic layer → Controller; Data layer → Model.
Delegate?
+
Type-safe function pointer.
Delegation?
+
Forwarding user's identity to downstream systems.
DenyAnonymousAuthorization?
+
Policy that allows only authenticated users.
Dependency Injection?
+
Dependency Injection (DI) is a design pattern where dependent objects are injected rather than created inside a class. .NET Core has built-in DI support. It improves testability, maintainability, and loose coupling.
Deployment Slot?
+
Environment preview before production deployment, commonly in Azure.
Deployment?
+
Publishing application to server.
Describe application state management in ASP.NET.
+
Application State stores global data accessible to all sessions. It is stored in server memory and persists until restart. Useful for shared counters or configuration data. It reduces repeated data loading.
Describe ASP.NET MVC.
+
It is a lightweight Microsoft framework that follows MVC architecture for building scalable, testable web applications.
Describe login Controls in ASP.
+
Login controls simplify user authentication. Examples include Login, LoginView, LoginStatus, PasswordRecovery, and CreateUserWizard. They handle username validation, password reset, and security membership. They reduce custom coding effort.
DI (Dependency Injection)?
+
A design pattern where dependencies are provided rather than created inside a class.
DI Container?
+
Object lifetime and dependency management system.
DI for Controllers
+
ASP.NET Core injects dependencies into controllers via constructor injection. Services must be registered in ConfigureServices.
DI for Views
+
Views receive dependencies using @inject directive. This helps share services such as logging or localization.
DifBet .NET Core and .NET Framework?
+
.NET Core is cross-platform and modular; .NET Framework is Windows-only and monolithic.
DifBet ASP.NET MVC and WebForms?
+
MVC follows separation of concerns and doesn’t use ViewState, while WebForms uses event-driven model with ViewState.
DifBet Authentication and Authorization?
+
Authentication verifies identity; Authorization verifies permissions.
DifBet Claims and Roles?
+
Role is a type of claim for grouping permissions.
DifBet Code First and DB First in EF?
+
Code First generates DB from classes, Database First generates classes from DB.
DifBet Dataset and DataReader?
+
Dataset is disconnected; DataReader is connected and forward-only.
DifBet EF and EF Core?
+
EF Core is cross-platform, lightweight, and supports LINQ to SQL.
DifBet EXE and DLL?
+
EXE is an executable process; DLL is a reusable library.
DifBet GET and POST?
+
GET retrieves data; POST submits or modifies server data.
DifBet LINQ to SQL and Entity Framework?
+
LINQ to SQL is limited to SQL Server; EF supports multiple databases.
DifBet PUT and PATCH?
+
PUT replaces entire resource; PATCH updates part of it.
DifBet Razor and ASPX view engine?
+
Razor is cleaner, faster, and uses minimal markup compared to ASPX.
DifBet REST and SOAP?
+
REST is lightweight and stateless using JSON, while SOAP uses XML and is more structured.
DifBet Role-Based vs Permission-Based?
+
Role groups permissions, permission defines specific capability.
DifBet session and cookies?
+
Cookies store on client browser, sessions store on server.
DifBet Thread and Task?
+
Thread is OS-level entity; Task is a higher-level abstraction.
DifBet Value type and Reference type?
+
Value types stored in stack, reference types stored in heap.
DifBet ViewBag and ViewData?
+
ViewData is dictionary-based; ViewBag uses dynamic properties. Both are temporary and request-scoped.
DifBet WCF and Web API?
+
WCF supports protocols like TCP/SOAP; Web API is REST-based.
DifBet worker process and app pool?
+
App pool groups worker processes; worker process executes application.
DiffBet 3-tier and MVC?
+
3-tier architecture has Presentation, Business, and Data layers. MVC has Model, View, and Controller roles for UI pattern.
DiffBet ActionResult and ViewResult.
+
ActionResult is a base type that can return various results; ViewResult specifically returns a View.
DiffBet ActionResult and ViewResult?
+
ActionResult is a base class for various result types (JsonResult, RedirectResult, etc.). ViewResult specifically returns a View. Controller methods can return either. ActionResult provides flexibility for different response formats.
DiffBet adding routes in WebForms and MVC.
+
WebForms uses file-based routing whereas MVC uses pattern-based routing. MVC routing maps URLs directly to controllers and actions.
DiffBet AddTransient, AddScoped, and AddSingleton?
+
Transient: new instance every time the service is requested. Scoped: one instance per HTTP request. Singleton: one instance for the entire application lifetime.
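A sketch using hypothetical marker interfaces to observe the three lifetimes:
// Hypothetical marker types used to observe lifetimes.
public interface ITransientGuid { Guid Id { get; } }
public interface IScopedGuid    { Guid Id { get; } }
public interface ISingletonGuid { Guid Id { get; } }

public class GuidHolder : ITransientGuid, IScopedGuid, ISingletonGuid
{
    public Guid Id { get; } = Guid.NewGuid();
}

// Program.cs:
builder.Services.AddTransient<ITransientGuid, GuidHolder>(); // new Id on every resolution
builder.Services.AddScoped<IScopedGuid, GuidHolder>();       // same Id within one HTTP request
builder.Services.AddSingleton<ISingletonGuid, GuidHolder>(); // same Id for the whole app lifetime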
DiffBet ASP.NET Core and ASP.NET?
+
Core is cross-platform, lightweight, modular, and faster. Classic ASP.NET is Windows-only, uses System.Web, and is heavier.
DiffBet ASP.NET MVC 5 and ASP.NET Core MVC?
+
ASP.NET Core MVC is cross-platform, modular, open-source, and integrates Web API into MVC. MVC 5 works only on Windows and is more monolithic. Core also uses middleware instead of pipeline handlers.
DiffBet EF Core and EF Framework?
+
EF Core is lightweight, cross-platform, extensible, and faster than EF6. EF6 runs only on .NET Framework and lacks modern features such as update batching and shadow properties.
DiffBet HTTP Handler and HTTP Module:
+
Handlers handle and respond to specific requests directly. Modules work in the pipeline and intercept requests during processing. Multiple modules can exist for one request, but only one handler processes it.
DiffBet HttpContext.Current.Items and HttpContext.Current.Session:
+
Items is used to store data for a single HTTP request and is cleared after the request ends. Session stores data across multiple requests for the same user. Items is faster and used for request-level sharing.
DiffBet MVVM and MVC?
+
MVC uses Controller for request handling, View for UI, and Model for data. MVVM uses ViewModel to handle binding logic between View and Model. MVVM supports two-way binding, especially in UI frameworks. MVC is better for web apps, MVVM suits rich UIs.
DiffBet Server.Transfer and Response.Redirect:
+
Server.Transfer transfers execution to another page on the server without changing the URL. Response.Redirect sends the browser to a new page and changes the URL. Redirect performs a round trip to the client; Transfer does not.
DiffBet session and caching:
+
Session stores user-specific data and is used per user. Cache stores application-wide frequently used data to improve performance. Session expires when the user ends or times out, while cache expiry depends on policies like sliding or absolute expiration.
DiffBet TempData, ViewData, and ViewBag?
+
ViewData: Dictionary-based, valid only for current request. ViewBag: Wrapper around ViewData using dynamic properties. TempData: Persists only for the next request (used for redirects).
DiffBet View and Partial View.
+
A View renders the full UI while a Partial View renders a reusable section of the UI.
DiffBet View and Partial View?
+
A View renders a complete page layout. A Partial View renders only a portion of UI. Partial View does not include layout pages by default. Useful for reusable components.
DiffBet Web API and WCF:
+
Web API is lightweight and designed for RESTful services using HTTP. WCF supports multiple protocols like HTTP, TCP, and MSMQ. Web API is best for modern web/mobile services, WCF for enterprise SOA.
DiffBet Web Forms and MVC?
+
MVC is lightweight and testable; Web Forms is event-driven and stateful.
DiffBet WebForms and MVC?
+
WebForms are event-driven and stateful. MVC is lightweight, stateless, and supports testability. MVC offers full control over HTML. WebForms use server-side controls and ViewState.
Difference: app.Use vs app.Run?
+
app.Use() chains middleware and can call the next component; app.Run() is terminal and ends the pipeline, so nothing registered after it executes.
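A minimal pipeline sketch; the X-Trace header is purely illustrative:
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

app.Use(async (context, next) =>
{
    context.Response.Headers["X-Trace"] = "demo"; // do some work
    await next();                                 // pass control down the pipeline
});

app.Run(async context =>                          // terminal: nothing after this runs
{
    await context.Response.WriteAsync("End of pipeline");
});

app.Run(); // start the server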
Different approaches to implement Ajax in MVC.
+
Using Ajax.BeginForm(), jQuery Ajax(), or Fetch API.
Different properties of MVC routes?
+
Key properties are URL, Defaults, Constraints, and DataTokens.
Different return types used by the controller action method in MVC?
+
Common return types are ViewResult, JsonResult, RedirectResult, ContentResult, FileResult, and ActionResult. ActionResult is the base type for most results.
Different Session state management options available in ASP.NET?
+
ASP.NET stores user-specific data across requests using InProc, StateServer, SQL Server, or Custom modes. InProc keeps data in memory, while StateServer and SQL Server store it externally, all server-side and secure.
Different validators in ASP.NET?
+
Controls like RequiredField, Range, Compare, Regex, Custom, and ValidationSummary ensure correct input on client and server sides.
Different ways for bundling and minification in ASP.NET Core?
+
Combine and compress scripts/styles to reduce size and improve performance, using tools like Webpack or NUglify.
directive reads environment?
+
app.Environment.IsDevelopment()
Directory Service?
+
Stores users, groups, and permissions (AD, LDAP).
Display something in CodeIgniter?
+
Use the controller to load a view. Example: $this->load->view("welcome_message"); The view outputs content to the browser. Models supply data if required.
DisplayFor vs EditorFor?
+
DisplayFor shows read-only UI; EditorFor creates editable fields.
DisplayTemplate?
+
Reusable Display UI with @Html.DisplayFor.
distributed cache providers are supported?
+
Redis, SQL Server, NCache.
Distributed Cache?
+
Cache shared across multiple servers (Redis, SQL).
Distributed Tracing?
+
Tracing requests across microservices.
Distributed Tracing?
+
Tracking request flow through microservices with correlation IDs.
do you mean by partial view of MVC?
+
A partial view is a reusable view component used to render partial UI, such as headers or menus.
Docker in .NET context?
+
Run .NET apps in portable containers for easy deployment, scaling, and microservices.
Docker?
+
Containerization platform used to package and deploy applications.
Docker?
+
Container platform for packaging and deploying applications.
does MVC represent?
+
Model = business logic/data, View = UI, Controller = handles request and updates View.
dotnet CLI?
+
Command line interface for building and running .NET applications.
Drawbacks of MVC model:
+
More development complexity, steep learning curve, and requires stronger knowledge of patterns.
DTO?
+
Data Transfer Object used to transfer lightweight data.
Dynamic Authorization?
+
Real-time decision-based authorization.
Eager Loading?
+
Loads related data via Include().
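A sketch assuming a hypothetical EF Core context (db) where Order has a Customer navigation property:
using Microsoft.EntityFrameworkCore;

// Include() loads the related Customer in the same query (a SQL JOIN)
// instead of triggering a separate lazy load on first access.
var orders = await db.Orders
    .Include(o => o.Customer)
    .ToListAsync();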
EditorTemplate?
+
Reusable Editable UI with @Html.EditorFor.
EF Core optimization coding?
+
Use Select, AsNoTracking, Include.
EF Core?
+
Object-relational mapper for .NET Core.
EF Core?
+
Modern lightweight ORM for database access.
EF Migration?
+
Feature to update database schema using version-controlled code.
Enable CORS
+
CORS is configured using services.AddCors() and enabled with app.UseCors(). It allows cross-domain API access.
Enable CORS in API?
+
services.AddCors(); app.UseCors(...);
Enable CORS?
+
Using middleware: app.UseCors()
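A minimal sketch of the registration described above; the policy name and origin are hypothetical:
builder.Services.AddCors(options =>
    options.AddPolicy("AllowClient", policy =>
        policy.WithOrigins("https://client.example.com") // hypothetical client origin
              .AllowAnyHeader()
              .AllowAnyMethod()));

var app = builder.Build();
app.UseCors("AllowClient"); // place before the endpoints that need it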
Enable JWT in API?
+
AddAuthentication().AddJwtBearer(...).
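A minimal sketch (requires the Microsoft.AspNetCore.Authentication.JwtBearer package); issuer and key are placeholders:
using Microsoft.AspNetCore.Authentication.JwtBearer;
using Microsoft.IdentityModel.Tokens;
using System.Text;

builder.Services
    .AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
    .AddJwtBearer(options =>
    {
        options.TokenValidationParameters = new TokenValidationParameters
        {
            ValidateIssuer = true,
            ValidIssuer = "https://issuer.example.com",   // placeholder issuer
            ValidateAudience = false,
            IssuerSigningKey = new SymmetricSecurityKey(
                Encoding.UTF8.GetBytes("replace-with-a-long-random-secret")) // placeholder key
        };
    });

var app = builder.Build();
app.UseAuthentication(); // validates the bearer token
app.UseAuthorization();  // enforces [Authorize]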
Enable Response Caching?
+
services.AddResponseCaching(); app.UseResponseCaching();
Encapsulation?
+
Bundling data and methods inside a class.
Endpoint Routing?
+
Modern routing system introduced to unify MVC, Razor Pages, and SignalR routing.
Ensure Web API returns JSON only?
+
Remove XML formatters and keep only the JSON formatter in WebApiConfig. Example: config.Formatters.Remove(config.Formatters.XmlFormatter); Now the API always responds in JSON format. Useful for modern REST services.
Enterprise Library:
+
Enterprise Library provides reusable software components like Logging, Data Access, Validation, and Exception Handling. Helps build enterprise-level maintainable applications.
Entity Framework?
+
ORM for accessing databases using objects.
Entity Framework?
+
An ORM that maps databases to .NET objects, supporting LINQ, migrations, and simplified data access.
Entity Framework?
+
ORM framework to interact with database using C# objects.
Environment Variable in ASP.NET Core?
+
External configuration determining environment (Development, Staging, Production).
Environment Variable?
+
Configuration used to define environment (Development, Staging, Production).
Error handling middleware?
+
Middleware for diagnostics and custom error responses (e.g., DeveloperExceptionPage, ExceptionHandler).
Error Handling Strategies
+
Use middleware like UseExceptionHandler, logging, global filters, and status code pages.
Event?
+
Notification triggered using delegates.
Examples of HTML Helpers?
+
TextBoxFor, DropDownListFor, LabelFor, HiddenFor.
Exception Handling?
+
Mechanism to handle runtime errors using try/catch/finally.
Execute any MVC project?
+
Build the project → run on IIS Express/localhost → routing selects the controller → the action returns a view → output is rendered in the browser.
Explain ASP.NET Core.
+
It is a cross-platform, open-source framework for building modern web applications. It provides high performance, modular design, and supports MVC, Razor Pages, Web APIs, and SignalR.
Explain Dependency Injection.
+
DI provides loose coupling by injecting required services at runtime. ASP.NET Core has DI support built-in.
Explain in brief the role of different MVC components.
+
Model manages logic and data. View is responsible for UI. Controller acts as a bridge, processing user requests and returning responses.
Explain Model, View, and Controller in Brief.
+
Model holds application logic and data. View displays data to the user. Controller handles user input, interacts with Model, and selects the View to render.
Explain Request Pipeline.
+
Request flows through middleware components configured in Program.cs (pre .NET 6: Startup.cs) before generating a response.
Explain separation of concern.
+
It divides an application into distinct sections, each responsible for a single concern, reducing dependency.
Explain some benefits of using MVC.
+
It supports separation of concerns, easy testing, clean code structure, and supports TDD. It’s extensible and suitable for large applications.
Explain TempData, ViewData, ViewBag.
+
TempData: Stores data temporarily across redirects. ViewData: Key-value store for passing data to the view. ViewBag: Dynamic wrapper around ViewData.
Explain the MVC Application life cycle.
+
It includes: Application Start → Routing → Controller Initialization → Action Execution → Result Execution → View Rendering → Response sent to client.
Explicit Allow?
+
Specific rule allows access.
Explicit Deny?
+
Rule that overrides all allows.
Extension Method?
+
Add new methods to existing types without modifying them.
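A minimal sketch: a hypothetical Truncate method added to string:
public static class StringExtensions
{
    // Called as if it were an instance member: "Hello world".Truncate(5) -> "Hello"
    public static string Truncate(this string value, int maxLength) =>
        value.Length <= maxLength ? value : value.Substring(0, maxLength);
}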
external authentication?
+
Login using Google, Microsoft, Facebook, GitHub providers.
Feature Toggle?
+
Enables or disables features dynamically.
Features of MVC?
+
MVC supports separation of concerns. It promotes testability, flexibility, and clean architecture. Provides routing, Razor syntax, and built-in validation. Ideal for large, scalable web applications.
Federation in Authorization?
+
Trust relationship between identity providers and applications.
File extension for Razor views?
+
.cshtml
File extensions for Razor views?
+
Razor views use .cshtml for C# and .vbhtml for VB.NET. These files support inline Razor syntax.
file replaces Web.config in ASP.NET Core?
+
appsettings.json
FileResult?
+
Returns files like PDF, images, or documents.
Filter in MVC?
+
Reusable logic executed before or after action methods.
Filter types?
+
Authorization, Resource, Action, Exception, Result filters.
Filters executed at the end:
+
Result filters are executed at the end, just before and after the view is rendered.
Filters in ASP.NET Core?
+
Run pre- or post-action logic like validation, logging, caching, or authorization in controllers.
Filters in MVC Core?
+
Reusable logic executed before or after actions.
Filters?
+
Components to run code before/after actions.
Fine-Grained Authorization?
+
Permission-level control instead of role-level.
FormCollection?
+
Object storing form values submitted by user.
Forms Authentication?
+
User logs in through custom login form.
Framework-Dependent Deployment?
+
App runs on an installed .NET runtime, producing a smaller executable.
Frontchannel Communication?
+
Browser-based token communication.
GAC : Global Assembly Cache?
+
Stores shared .NET assemblies for multiple apps, supporting versioning and avoiding DLL conflicts.
Garbage Collection (GC)?
+
Automatic memory management that removes unused objects.
Garbage Collection?
+
Automatic memory cleanup of unused objects.
GC generations?
+
Gen 0, Gen 1, Gen 2 used to optimize memory cleanup.
Generic Repository?
+
A reusable data access pattern that works with any entity type to perform CRUD operations.
GET and POST Action types:
+
GET retrieves data and does not modify state. POST submits data and is used for creating or updating records.
Global exception handling coding?
+
Create custom exception middleware.
Global Exception Handling?
+
Error handling applied across entire application using middleware.
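A minimal custom middleware sketch for the approach above:
public class ExceptionMiddleware
{
    private readonly RequestDelegate _next;
    public ExceptionMiddleware(RequestDelegate next) => _next = next;

    public async Task InvokeAsync(HttpContext context)
    {
        try
        {
            await _next(context); // run the rest of the pipeline
        }
        catch (Exception)
        {
            // Log here, then return a generic error response.
            context.Response.StatusCode = StatusCodes.Status500InternalServerError;
            await context.Response.WriteAsync("An unexpected error occurred.");
        }
    }
}

// Program.cs: register early so it wraps everything after it.
// app.UseMiddleware<ExceptionMiddleware>();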
Global.asax?
+
Application-level events like Start, End, Error.
GridView Control:
+
GridView displays data in a tabular format and supports sorting, paging, and editing. It binds to data sources like SQL, lists, or datasets. It provides templates and commands for customization.
gRPC in .NET?
+
High-performance, protocol-buffer-based communication for microservices, faster than REST.
gRPC?
+
High-performance communication protocol using binary messaging and HTTP/2.
gRPC?
+
High-performance RPC protocol using HTTP/2 for communication.
GZip Compression?
+
Compressing responses to reduce payload size.
Handle 404 in ASP.NET Core?
+
Use middleware such as: app.UseStatusCodePages();
HATEOAS?
+
Responses include links to guide client navigation.
HATEOAS?
+
Hypermedia as the Engine of Application State, a REST API constraint.
Health Check Endpoint?
+
Endpoint to verify system status and dependencies.
Health Check endpoint?
+
Used for monitoring health status and dependencies like DB or Redis.
Health Check in .NET Core?
+
Monitor app and dependency status, useful for Kubernetes and cloud deployments.
Health Checks?
+
Endpoints that report app health.
Host in ASP.NET Core?
+
Manages DI, configuration, logging, and middleware; includes WebHost and GenericHost.
Host?
+
Host manages app lifetime, the DI container, configuration, and logging. It is the core runtime container.
Host?
+
Host manages app lifetime, configuration, logging, DI, and environment.
HostedService?
+
Interface for background tasks.
Hot Reload?
+
Hot Reload allows modifying code while the application is running. It improves productivity by reducing restart time.
Hot Reload?
+
Feature allowing code changes without restarting application.
How authorize multiple roles?
+
[Authorize(Roles = "Admin,Manager")]
How execute Stored Procedures?
+
Use FromSqlRaw().
How implement Pagination?
+
Use Skip() and Take().
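A sketch assuming a hypothetical EF Core DbSet db.Products; page numbers are 1-based:
using Microsoft.EntityFrameworkCore;

int page = 2, pageSize = 10;
var items = await db.Products
    .OrderBy(p => p.Id)          // a stable order is required before Skip/Take
    .Skip((page - 1) * pageSize) // skip the earlier pages
    .Take(pageSize)              // take exactly one page
    .ToListAsync();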
How prevent privilege escalation?
+
Validate authorization checks on every sensitive action.
How prevent SQL Injection?
+
Use parameterized queries and stored procedures.
How register EF Core?
+
services.AddDbContext<AppDbContext>(options => options.UseSqlServer(...)); // AppDbContext = your DbContext type
How return IActionResult?
+
Use Ok(), NotFound(), BadRequest(), Created().
How Seed Data?
+
Use HasData() inside OnModelCreating().
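A sketch inside a hypothetical DbContext; the Role entity is illustrative:
public class Role
{
    public int Id { get; set; }
    public string Name { get; set; } = "";
}

protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    // Seed rows are applied by migrations; primary keys must be supplied.
    modelBuilder.Entity<Role>().HasData(
        new Role { Id = 1, Name = "Admin" },
        new Role { Id = 2, Name = "User" });
}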
How upload files?
+
Use IFormFile parameter.
HTML Helper?
+
Methods that generate HTML controls programmatically in views.
HTML server controls in ASP.NET?
+
HTML controls become server controls by adding runat="server". They behave like programmable server-side objects. They allow event handling and server processing.
HTTP Handler?
+
An HttpHandler is a component that processes individual HTTP requests. It acts as an endpoint for file extensions like .aspx, .ashx, .jpg etc. It is lightweight and best for custom resource generation.
HTTP Logging Middleware?
+
Logs details about incoming requests and responses.
HTTP Status Codes?
+
200 OK, 201 Created, 400 Bad Request, 401 Unauthorized, 404 Not Found, 500 Server Error.
HTTP Verb Mapping?
+
Mapping controller actions to verbs using [HttpGet], [HttpPost], etc.
HTTP Verb?
+
Operations like GET, POST, PUT, DELETE mapped to actions.
HttpClientFactory?
+
Factory pattern to create and manage HttpClient instances.
HttpModule?
+
Windows-only ASP.NET components that handle HTTP request/response events in the pipeline.
HTTPS Redirection Middleware?
+
Forces application to use secure connection.
HTTPS Redirection?
+
Force HTTPS using app.UseHttpsRedirection().
IActionFilter?
+
Interface for implementing custom filters.
IActionResult?
+
Base interface for different action results.
IActionResult?
+
Base interface for action results in ASP.NET Core MVC.
IAM?
+
Identity and Access Management.
IAuthorizationService?
+
Service to manually invoke authorization programmatically.
IConfiguration?
+
Interface used to access application configuration values.
IConfiguration?
+
Interface used to read configuration data.
Idempotency?
+
Operation that produces the same result when repeated.
Identity Framework?
+
Built-in membership system for authentication and user roles.
Identity Provider (IdP)?
+
Service that authenticates users.
IdentityServer?
+
OAuth2/OpenID Connect framework for authentication and authorization.
IHttpClientFactory?
+
Factory for creating HttpClient instances safely.
IHttpClientFactory?
+
IHttpClientFactory creates and manages HttpClient instances to avoid socket exhaustion and improve performance in Web API calls.
IHttpClientFactory?
+
ASP.NET Core factory for creating and managing HttpClient instances.
IHttpClientFactory?
+
Factory pattern for creating optimized HttpClient instances.
IHttpContextAccessor?
+
Used to access HTTP context in non-controller classes.
IIS Integration?
+
In Windows hosting, Kestrel works behind IIS. IIS handles SSL, load balancing, and process management, while Kestrel executes the request pipeline.
IIS?
+
Web server for hosting ASP.NET apps.
IIS?
+
Internet Information Services — a Windows web server.
ILogger?
+
Logging interface used for tracking application events.
Impersonation?
+
Executing code under another user's identity.
Impersonation?
+
Execute actions under another user's identity.
Implement Ajax in MVC?
+
Using @Ajax.BeginForm() and AjaxOptions. You can call actions asynchronously using jQuery AJAX. The server returns JSON or partial views. This improves performance without full page reloads.
Implement MVC Forms authentication:
+
Forms authentication uses login pages, authentication cookies, and AuthorizeAttribute to protect secured pages.
Implicit Deny?
+
If no rule allows it, access is denied.
Importance of NonActionAttribute?
+
It marks a method in a controller as not an action method. This prevents it from being executed via URL routing. Useful for helper methods within controllers. Enhances security and routing control.
Improve API Performance?
+
Caching, AsNoTracking, async queries, efficient queries.
Improve ASP.NET performance:
+
Use caching, compression, output caching, and minimized ViewState. Optimize SQL queries and enable async processing. Reduce server round trips and bundling/minifying scripts.
Inheritance?
+
Deriving classes from base classes.
In-memory vs Distributed Cache
+
In-memory caching stores data on the server and is best for single-instance apps. Distributed caching uses Redis or SQL Server and supports load-balanced environments.
Interface?
+
Contract specifying methods without implementation.
IOptions pattern?
+
Method to bind strongly-typed settings from configuration to C# classes.
IOptions pattern?
+
Used to map configuration sections to strongly typed classes.
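A sketch binding a hypothetical "Smtp" section to a typed class:
// appsettings.json: "Smtp": { "Host": "smtp.example.com", "Port": 587 }
public class SmtpOptions
{
    public string Host { get; set; } = "";
    public int Port { get; set; }
}

// Program.cs:
builder.Services.Configure<SmtpOptions>(builder.Configuration.GetSection("Smtp"));

// Consumer (requires Microsoft.Extensions.Options):
public class MailSender
{
    private readonly SmtpOptions _options;
    public MailSender(IOptions<SmtpOptions> options) => _options = options.Value;
}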
Is ASP.NET Core open source?
+
Yes, it is developed under the .NET Foundation and is fully open source.
Is DI built-in in ASP.NET Core?
+
Yes, ASP.NET Core has built-in DI support.
Is MVC stateless?
+
Yes, MVC follows stateless architecture where every request is independent.
JIT Compiler?
+
Just-In-Time compiler that converts IL code to native machine code.
JIT compiler?
+
Converts IL to native code at runtime, optimizing performance and memory; types include Pre-JIT, Econo-JIT, Normal-JIT.
JIT compiler?
+
Just-in-Time compiler converts IL code to machine code during runtime.
JSON global config?
+
builder.Services.AddControllers().AddJsonOptions(...) for MVC, or builder.Services.ConfigureHttpJsonOptions(...) for minimal APIs.
JSON Serialization?
+
Converting objects into JSON format for transport or storage.
JSON Serializer used?
+
System.Text.Json (default), with option to use Newtonsoft.Json.
JSON.stringify?
+
Converts a JavaScript object into JSON format for AJAX posts.
JsonResult?
+
Returns JSON formatted response.
Just-In-Time Access (JIT)?
+
Provide temporary elevated permissions.
JWT Authentication?
+
JWT (JSON Web Token) is a token-based authentication method used in microservices and APIs. It stores claims and is stateless, meaning no session storage is required.
JWT creation coding?
+
Use JwtSecurityTokenHandler to generate token.
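A minimal sketch using System.IdentityModel.Tokens.Jwt; the key, claim, and lifetime are placeholders:
using System.IdentityModel.Tokens.Jwt;
using System.Security.Claims;
using System.Text;
using Microsoft.IdentityModel.Tokens;

var key = new SymmetricSecurityKey(
    Encoding.UTF8.GetBytes("replace-with-a-long-random-secret")); // placeholder key

var descriptor = new SecurityTokenDescriptor
{
    Subject = new ClaimsIdentity(new[] { new Claim(ClaimTypes.Name, "alice") }),
    Expires = DateTime.UtcNow.AddHours(1),
    SigningCredentials = new SigningCredentials(key, SecurityAlgorithms.HmacSha256)
};

var handler = new JwtSecurityTokenHandler();
string jwt = handler.WriteToken(handler.CreateToken(descriptor));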
JWT Token?
+
Stateless token format used for authentication.
JWT?
+
A compact, self-contained token for securely transmitting claims between parties.
JWT?
+
JSON Web Token for stateless authentication between client and server.
JWT?
+
JSON Web Token used for bearer authentication.
Kerberos?
+
Secure ticket-based authentication protocol.
Kestrel Server?
+
Kestrel is the default lightweight web server in ASP.NET Core. It is fast, cross-platform, and optimized for high-performance apps.
Kestrel?
+
Cross-platform lightweight web server for ASP.NET Core.
Kestrel?
+
A lightweight, cross-platform web server used by ASP.NET Core applications.
Key DifBet ASP.NET and ASP.NET Core?
+
ASP.NET Core is cross-platform, modular, open-source, and faster compared to ASP.NET Framework.
Kubernetes?
+
Container orchestration platform used to deploy microservices.
Latest version of ASP.NET Core?
+
The latest stable version of ASP.NET Core (as of December 2025) follows the latest .NET release: ASP.NET Core 10.0, shipped with .NET 10 on November 11, 2025.
LaunchSettings.json in ASP.NET Core?
+
This file stores environment and profile settings for the application during development. It defines the application URL, SSL settings, and environment variables like ASPNETCORE_ENVIRONMENT. It helps configure debugging profiles for IIS Express or direct execution.
Layout page?
+
Template defining common design elements such as header and footer.
Layout Page?
+
Master template providing shared UI like header/footer across multiple views.
Lazy Loading?
+
Loads navigation properties on first access.
Least Privilege Access?
+
Users receive minimal required permissions.
library supports resiliency?
+
Polly.
LINQ?
+
Query syntax integrated into C# to query collections/databases.
LINQ?
+
LINQ (Language Integrated Query) allows querying data from collections, databases, XML, etc. using C# syntax. It improves code readability and eliminates SQL string errors.
LINQ?
+
Query syntax for querying data collections, SQL, XML, and EF.
LINQ?
+
Query syntax used to retrieve data from collections or databases.
List HTTP methods.
+
GET, POST, PUT, PATCH, DELETE, OPTIONS.
Load Balancing?
+
Distribute requests across servers.
Load Balancing?
+
Distributing application traffic across multiple servers for performance and redundancy.
Lock statement?
+
Prevents multiple threads from accessing code simultaneously.
Logging in .NET Core?
+
.NET Core provides built-in logging with providers like Console, Debug, Serilog, and Application Insights. It helps monitor app behavior and errors.
Logging in ASP.NET Core?
+
Built-in framework to log information using ILogger.
Logging in MVC Core?
+
Capturing application logs via ILogger and providers.
logging providers are supported?
+
Console, Debug, Azure App Insights, Seq, Serilog.
Logging Providers?
+
Serilog, NLog, Seq, Application Insights.
Logging System
+
Built-in support for console, file, Application Insights, Serilog, etc.
Logging?
+
System to capture and store application logs.
Machine.config?
+
System-wide configuration file for .NET Framework.
Main DiffBet MVC and Web API?
+
MVC is used to return views (HTML) for web applications. Web API is used to build RESTful services and returns data formats like JSON or XML. MVC is UI-focused, whereas Web API is service-focused. Web API can be used by mobile, IoT, and web clients.
Maintain the sessions in MVC?
+
Session can be maintained using Session[], cookies, TempData, ViewBag, QueryString, and Hidden fields.
Major events in Global.asax?
+
Common events include Application_Start, Session_Start, Application_BeginRequest, Session_End, and Application_End. These events manage application life cycle tasks. They handle logging, caching, and security logic. They execute globally for the entire application.
Managed Code?
+
Code executed under the supervision of CLR.
master pages in ASP.NET?
+
Master pages define a common layout for multiple web pages. Content pages inherit this layout to maintain consistent UI. They reduce duplication of HTML code. Common parts like headers, footers, and menus are shared.
Master Pages:
+
Master Pages define a common layout for multiple pages. Content pages fill placeholders within the master. Useful for consistency and easier maintenance.
Message Queues?
+
Kafka, RabbitMQ, Azure Service Bus.
Metadata in .NET?
+
Information about types, methods, references stored with assemblies.
Methods of session maintenance in ASP.NET:
+
ASP.NET provides several ways to maintain sessions, including In-Process (InProc), State Server, SQL Server, and Custom session state providers. Cookies and cookieless sessions are also used. These mechanisms help store user-specific data across requests.
MFA?
+
Multi-factor authentication using multiple methods.
Microservices Architecture?
+
Architecture pattern where the application is composed of independent services.
Microservices architecture?
+
System divided into small loosely coupled services.
Middleware components?
+
Pipeline components that process HTTP requests and responses in sequence.
Middleware Concept
+
Middleware are components processing requests in sequence.
Middleware in ASP.NET Core?
+
Pipeline components that process HTTP requests/responses, e.g., authentication, routing, logging, CORS.
Middleware Pipeline?
+
Requests pass through ordered middleware, each handling logic before forwarding.
Middleware Pipeline?
+
Sequential execution of request-processing components in ASP.NET Core.
Middleware?
+
A pipeline component that processes HTTP requests and responses. it is lightweight, runs cross-platform, and fully configurable in code.
middleware?
+
Components that process HTTP requests in ASP.NET Core pipeline.
Migration commands?
+
dotnet ef migrations add Name; dotnet ef database update
Migrations?
+
System for applying and tracking database schema changes.
Minification and Bundling used?
+
They reduce file size and combine multiple CSS/JS files to improve performance.
Minimal API?
+
Lightweight syntax for defining HTTP endpoints without controllers, using MapGet(), MapPost(), MapPut(), etc. Minimal code, ideal for microservices and prototypes.
Minimal API?
+
Lightweight HTTP API setup introduced in .NET 6 using minimal hosting model.
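A minimal sketch; UserDto is a hypothetical record:
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// Routes map directly to handlers; no controller class is required.
app.MapGet("/hello", () => "Hello, world");
app.MapGet("/users/{id:int}", (int id) => Results.Ok(new { id }));
app.MapPost("/users", (UserDto user) => Results.Created($"/users/{user.Id}", user));

app.Run();

public record UserDto(int Id, string Name);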
Mocking Framework?
+
Tools like MOQ used to simulate dependencies during testing.
Mocking?
+
Simulating dependencies using fake objects.
Model Binding in Razor Pages?
+
Mapping form inputs automatically to page properties.
Model Binder?
+
Maps request data to models automatically.
Model Binding
+
Automatically maps form, query string, and JSON data to model classes.
Model Binding?
+
Model binding automatically maps incoming HTTP request data to method parameters or model objects. It simplifies request handling in MVC and Web API.
Model Binding?
+
Automatic mapping of HTTP request data to action method parameters.
Model Binding?
+
Automatic mapping of request data to method parameters or models.
Model Binding?
+
Automatic mapping of HTTP request data to model objects.
Model in MVC?
+
Model represents application data and business logic.
Model Validation
+
Uses Data Annotations and custom validation attributes.
Model Validation?
+
Ensures incoming data meets rules via DataAnnotations.
Model Validation?
+
Ensures input values meet defined requirements before processing.
Model Validation?
+
Ensuring input meets validation rules before processing.
ModelState?
+
Stores the state of model binding and validation errors.
Model-View-Controller?
+
MVC is a design pattern that separates an application into Model, View, and Controller components.
Monolith Architecture?
+
Single deployable unit with tightly coupled components.
Monolithic architecture?
+
Single deployable unit with tightly-coupled components.
MSIL?
+
Intermediate language generated from .NET code before JIT compilation.
Multicast Delegate?
+
Delegate pointing to multiple methods.
Multiple environments
+
Configured using ASPNETCORE_ENVIRONMENT variable (Dev, Staging, Prod).
MVC Architecture
+
Separates application logic into Model, View, Controller.
MVC Components
+
Model stores data, View displays UI, Controller handles requests.
MVC in AngularJS?
+
AngularJS follows an MVC-like architecture. Model holds data, View represents the UI, and Controller manages logic. It helps in clear separation of concerns in client-side apps. Angular automates data binding between Model and View.
MVC in ASP.NET Core?
+
Model-View-Controller pattern used for web UI and API development.
MVC Page life cycle stages:
+
Stages include Routing, Controller initialization, Action execution, Result execution, and View rendering.
MVC Routing?
+
Maps URL patterns to controller actions.
MVC works in Spring?
+
Spring MVC uses DispatcherServlet as the front controller. It routes requests to controllers. Controllers return Model and View data. The ViewResolver renders the final response.
MVC?
+
A design pattern dividing application logic into Model, View, Controller.
MVC?
+
MVC stands for Model-View-Controller architecture separating UI, data, and logic.
MVC?
+
MVC (Model-View-Controller) separates business logic, UI, and request handling into Model, View, and Controller. This improves testability, maintainability, and scalability, and is widely used for modern web applications.
Name the assembly in which the MVC framework is typically defined.
+
ASP.NET MVC is mainly defined in the System.Web.Mvc assembly.
Namespace?
+
A container for organizing classes and types.
Navigate from one view to another using a hyperlink?
+
Use the Html.ActionLink() helper in MVC. Example: @Html.ActionLink("Go to About", "About", "Home"). This generates an anchor tag with route mapping. Clicking it redirects to the specified view.
Navigation between views example.
+
Using a hyperlink, e.g. @Html.ActionLink("Go to About", "About", "Home"); MVC resolves it via routing to the controller action and its view.
Navigation techniques:
+
Navigation in ASP.NET uses Hyperlinks, Response.Redirect, Server.Transfer, Cross-page posting, and Site Navigation controls like Menu and TreeView. It helps users move between pages.
New features in ASP.NET Core?
+
Built-in dependency injection, cross-platform support, unified MVC + Web API, a lightweight middleware pipeline, and performance improvements. Recent releases add enhanced Minimal APIs, better real-time support, updated security, and stronger observability tools.
New in .NET Core 2.1 / ASP.NET Core 2.1?
+
Features include Razor Class Libraries, HTTPS by default, SPA templates, SignalR support, and GDPR compliance tools. It also introduced global tools, improved performance, and simplified identity UI.
Non-Repudiation?
+
Ensuring actions cannot be denied by users.
N-Tier architecture?
+
Layers like UI, Business, Data Access.
NTLM?
+
Windows challenge-response authentication protocol.
NuGet?
+
NuGet is the package manager for .NET. Developers use it to download, share, and manage libraries. It supports dependency resolution and automatic updates.
NuGet?
+
Package manager for .NET libraries.
Nullable type?
+
Represents value types that can be null.
NUnit/MSTest?
+
Unit testing frameworks for .NET.
OAuth Refresh Token Rotation?
+
Invalidating old refresh token when issuing a new one.
OAuth vs SAML?
+
OAuth is authorization; SAML is authentication using XML.
OAuth?
+
Open standard for secure delegated access.
OAuth2 Authorization Code Flow?
+
Secure flow used by web apps requiring user login.
OAuth2 Client Credentials Flow?
+
Service-to-service authorization.
OAuth2 Implicit Flow?
+
Legacy browser flow not recommended.
OAuth2?
+
Delegated authorization framework for delegated access.
OAuth2?
+
Authorization framework allowing delegated access using tokens.
OOP?
+
Programming model using classes, inheritance, and polymorphism.
OpenID Connect?
+
Authentication layer on top of OAuth2 for user login and identity management.
OpenID Connect?
+
Authentication layer built on top of OAuth 2.0.
OpenID Connect?
+
Identity layer on top of OAuth 2.0.
OpenID Connect?
+
Identity layer on top of OAuth for login authentication.
Optimistic Concurrency?
+
Use [Timestamp]/RowVersion to prevent data overwrites via row-version checks.
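A sketch of the row-version approach; Product is a hypothetical entity:
using System.ComponentModel.DataAnnotations;

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; } = "";

    [Timestamp] // SQL Server updates this column on every write
    public byte[] RowVersion { get; set; } = Array.Empty<byte>();
}

// A stale RowVersion makes SaveChanges fail:
// try { await db.SaveChangesAsync(); }
// catch (DbUpdateConcurrencyException) { /* reload or merge, then retry */ }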
Options Pattern
+
Used to bind strongly typed classes to configuration sections.
Order of filter execution in MVC
+
Order: 1) Authorization filters, 2) Action filters, 3) Result filters, 4) Exception filters. Execution occurs in a defined pipeline sequence.
Ordering execution when multiple filters are used:
+
Filters run in the order: Authorization → Action → Result → Exception filters. Custom ordering can also be defined using the Order property.
OutputCache?
+
Caching mechanism used in MVC Framework to improve response time.
OWIN and ASP.NET Core
+
OWIN was designed to decouple web servers from web applications. ASP.NET Core builds on the same lightweight pipeline concept but replaces OWIN with a more flexible middleware model.
package enables Swagger?
+
Swashbuckle.AspNetCore
Page directives in ASP.NET:
+
Page directives provide configuration and instruction to the compiler. Examples include @Page, @Import, @Master, and @Control. They define attributes like language, inheritance, and code-behind file.
Pagination coding question?
+
Implement Skip(), Take(), and metadata.
Pagination in API?
+
Return data with totalCount, pageNo, pageSize.
Partial Class?
+
Split class across multiple files.
Partial view in MVC?
+
A partial view is a reusable piece of UI code. It works like a user control and avoids code duplication. It is rendered inside another view. Useful for menus, headers, and reusable content blocks.
Partial View?
+
Reusable view component shared across multiple views.
Partial View?
+
Reusable UI component used in multiple views.
Partial Views
+
Partial views reuse UI sections like menus or forms. They reduce code duplication and improve maintainability.
Parts of JWT?
+
Header, Payload, Signature.
PBAC?
+
Policy-Based Access Control.
Permission?
+
A specific capability like Read, Write, or Delete.
Permission-Based API Authorization?
+
APIs check user permissions before actions.
PKCE?
+
Enhanced security for mobile and SPA apps.
Points to remember while creating MVC application?
+
Maintain separation of concerns. Use routing properly for readability. Keep business logic in the Model or services. Use ViewModels instead of exposing database models.
Policies in authorization?
+
Reusable authorization rules defined using AddAuthorization.
Policy Decision Point (PDP)?
+
Component that evaluates authorization policy.
Policy Enforcement Point (PEP)?
+
Component that checks access rules.
Policy-Based Authorization?
+
Define custom authorization rules inside AddAuthorization().
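A sketch with a hypothetical "AtLeast18" policy built from an age claim:
builder.Services.AddAuthorization(options =>
{
    options.AddPolicy("AtLeast18", policy =>
        policy.RequireAssertion(ctx =>
            ctx.User.HasClaim(c =>
                c.Type == "age" && int.TryParse(c.Value, out var a) && a >= 18)));
});

// Applied to an action:
// [Authorize(Policy = "AtLeast18")]
// public IActionResult Checkout() => View();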
Polymorphism?
+
Ability to override methods for different behavior.
Post-Authorization Logging?
+
Record actions taken after authorization.
PostBack property:
+
IsPostBack indicates whether the page is loaded first time or due to a user action like a button click. It helps avoid re-binding data unnecessarily. Useful for improving performance.
PostBack?
+
When a page sends data to the server and reloads itself.
Prevent CSRF?
+
Anti-forgery tokens and SameSite cookies.
Prevent SQL Injection?
+
Parameterized queries/EF Core.
Principle of Least Privilege?
+
Users get only required permissions.
Privilege Escalation?
+
Attack where user gains unauthorized permissions.
Privileged Access Management (PAM)?
+
System to monitor and control high-privilege accounts.
Program.cs used for?
+
Defines application bootstrap, host builder, and startup configuration.
Program.cs?
+
Entry point that configures the host, services, and middleware.
Purpose of MVC pattern?
+
To separate concerns and make application maintainable, testable, and scalable.
Query String in ASP?
+
Query strings pass values through the URL during page requests. They are used for lightweight data transfer. A query string starts after a ? in the URL. It is visible to users, so sensitive data should not be stored.
Rate Limiting?
+
Restricting how many requests a client can make.
rate limiting?
+
Controlling request frequency to protect system resources.
Rate Limiting?
+
Controls request frequency to prevent abuse.
Razor Pages in ASP.NET Core?
+
Page-focused ASP.NET Core model with combined view and logic, ideal for CRUD apps.
Razor Pages?
+
A page-focused ASP.NET Core model where each page has its own UI and logic, ideal for simpler web apps.
Razor Pages?
+
A page-based framework for building UI similar to MVC but simpler.
Razor Pages?
+
Page-based model alternative to MVC introduced in .NET Core.
Razor View Engine?
+
Syntax for rendering HTML with C# code.
Razor View Engine?
+
Lightweight syntax for writing server-side code inside HTML.
Razor view file extensions:
+
.cshtml (C# Razor) and .vbhtml (VB Razor) are used for Razor views.
Razor?
+
Razor is a templating engine used in ASP.NET MVC and Razor Pages. It combines C# with HTML to generate dynamic UI. It is lightweight, fast, and easy to use.
Razor?
+
A markup syntax in ASP.NET for embedding C# into views.
RBAC?
+
Role-Based Access Control.
Real-life example of MVC?
+
A shopping website: Model = product data; View = product display page; Controller = user actions like Add to Cart. They work together to complete the functionality.
RedirectToAction()?
+
Redirects browser to another action or controller.
Redis caching coding?
+
AddStackExchangeRedisCache().
Redis?
+
Fast distributed in-memory caching system.
Redis?
+
In-memory distributed caching system.
Reflection?
+
Inspecting metadata and creating objects dynamically at runtime.
Refresh Token?
+
A long-lived token used to obtain new access tokens without re-login.
Remoting?
+
Legacy communication between .NET applications.
RenderBody vs RenderPage:
+
RenderBody() outputs the content of the child view in layout. RenderPage() inserts another Razor page inside a view like a partial.
Repository Pattern?
+
Abstraction layer over data access.
Repository Pattern?
+
Abstraction layer separating business logic from data access logic.
Repository Pattern?
+
A pattern separating data access layer from business logic.
Request Delegate?
+
A delegate such as RequestDelegate handles HTTP requests and responses inside middleware.
Resource Server?
+
API that verifies and uses access tokens.
Resource?
+
A data entity identified by a URI like /users/1.
Resource-Based Authorization?
+
Authorization rules applied based on a specific resource instance.
Response Compression?
+
Compresses HTTP responses using gzip/br or deflate.
Response Compression?
+
Compressing HTTP output for faster response.
REST API?
+
API that adheres to REST principles such as statelessness, resource identification, caching.
REST?
+
An architectural style using stateless communication over HTTP with resources.
REST?
+
Representational State Transfer — stateless communication using HTTP verbs.
Retry Policy?
+
Automatic retry logic for failed external calls.
Return PartialView()?
+
Returns only partial content without layout.
Return types of an action method:
+
Returns include ViewResult, JsonResult, RedirectResult, ContentResult, FileResult, and ActionResult.
Return View()?
+
Returns a full view to the browser.
reverse proxy?
+
Middleware forwarding requests from IIS/Nginx to Kestrel.
Role of ActionFilters in MVC?
+
ActionFilters allow you to run logic before or after an action executes. They help in cross-cutting concerns like logging, authentication, caching, and exception handling. Filters can be applied at the controller or method level. Examples include: Authorize, HandleError, and OutputCache.
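A minimal logging filter sketch, written in ASP.NET Core style (IActionFilter):
using Microsoft.AspNetCore.Mvc.Filters;

public class LogActionFilter : IActionFilter
{
    public void OnActionExecuting(ActionExecutingContext context) =>
        Console.WriteLine($"Executing {context.ActionDescriptor.DisplayName}");

    public void OnActionExecuted(ActionExecutedContext context) =>
        Console.WriteLine($"Executed {context.ActionDescriptor.DisplayName}");
}

// Global registration:
// builder.Services.AddControllers(o => o.Filters.Add<LogActionFilter>());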
Role of Configure() method?
+
Defines the request handling pipeline using middleware like routing, authentication, static files, etc.
Role of ConfigureServices()
+
Used to register services like DI, EF Core, identity, logging, and custom services.
Role of IHostingEnvironment?
+
Provides environment-specific info like Development, Production, and staging.
Role of Middleware
+
Authentication, logging, routing, exception handling.
Role of MVC components:
+
Presentation (View) shows data, Abstraction (Model) handles logic/data, Control (Controller) manages requests and updates.
Role of MVC in AngularJS?
+
MVC helps structure the application for maintainability. Model stores data, View displays data using HTML, and Controller updates data. Angular’s two-way binding keeps Model and View synchronized. It helps in scaling complex front-end applications.
Role of Startup class?
+
It configures application services via ConfigureServices() and request pipeline via Configure().
Role of WebHost.CreateDefaultBuilder()?
+
Configures default settings like Kestrel, logging, config, ENV detection.
Role?
+
A named group of permissions.
Role-Based Authorization?
+
Restrict access using roles, e.g., [Authorize(Roles="Admin")].
RouteConfig.cs?
+
Contains registration logic for routing in MVC Framework.
Routes difference in WebForm vs MVC:
+
WebForms use file-based routing, MVC uses pattern-based routing with controllers and actions.
Routing
+
Maps URLs to controllers and actions using UseRouting() and MapControllerRoute().
routing and three segments?
+
Routing is the process of mapping incoming URLs to controller actions. The default pattern contains three segments: {controller}/{action}/{id}. It helps in SEO-friendly and user-readable URLs.
Routing carried out in MVC?
+
Routing engine matches the URL with route patterns from the RouteConfig and executes the mapped controller and action.
Routing in MVC?
+
Routing maps URLs to corresponding Controller actions.
routing in MVC?
+
Routing maps incoming URL requests to specific controllers and actions.
Routing is done in the MVC pattern?
+
Routing is handled by a RouteConfig.cs file (or Program.cs in .NET Core). ASP.NET MVC uses pattern matching to map URLs to controllers. Routes are registered at application startup. Based on the URL, MVC identifies which controller and action to execute.
Routing is not required?
+
1. Serving static files (images, CSS, JS). 2. Accessing .axd resource handlers. Routing bypasses these requests automatically.
Routing Types
+
Convention-based routing and attribute routing.
Routing?
+
Matches HTTP requests to endpoints.
routing?
+
Route mapping of URLs to controller actions.
Routing?
+
Mapping incoming URLs to controller actions or endpoints.
Row-Level Security?
+
User can only access specific rows based on rules.
Rules of Razor syntax:
+
Razor starts with @, supports IntelliSense, has clean HTML mixing, and minimizes closing tags compared to ASPX.
runtime does ASP.NET Core use?
+
.NET 5/6/7/8 (Unified .NET runtime).
Runtime Identifiers (RID)?
+
RID represents the platform where an app runs (e.g., win-x64, linux-arm64). Used for publishing self-contained apps.
Scaffolding?
+
Automatic generation of CRUD code for model and views.
Scope Creep?
+
Unauthorized expansion of delegated access.
Scope in OAuth2?
+
Defines what access the client is requesting.
Scoped lifetime?
+
Service created once per request.
Scoped lifetime?
+
One instance per HTTP request.
Scoped lifetime?
+
Creates one instance per client request.
Sealed class?
+
Class that cannot be inherited.
Security & Authorization
+
ASP.NET Core uses policies, role-based access, authentication middleware, and secure coding to protect resources. Best practices include HTTPS, input validation, and secure tokens.
Self-Authorization Design?
+
User automatically given access to own resources.
Self-Contained Deployment?
+
The app includes its own .NET runtime. It does not require .NET to be installed on the host machine.
Send JSON result in MVC?
+
Use return Json(object, JsonRequestBehavior.AllowGet);. This serializes the object into JSON format. Useful in AJAX-based applications. It is commonly used in API responses.
Separation of Duties?
+
Critical tasks split among multiple users.
Serialization Libraries?
+
System.Text.Json, Newtonsoft.Json.
Serialization?
+
Converting objects to byte streams, JSON, or XML.
Serilog?
+
Third-party structured logging library.
Serverless Computing?
+
Execution model where cloud runs functions without managing servers.
Server-side validation?
+
Validation performed on server during HTTP request processing.
Service Lifetimes
+
Transient, Scoped, Singleton.
Service Lifetimes?
+
Singleton, Scoped, Transient.
Session Fixation?
+
Attack that hijacks a valid session.
Session in MVC Core?
+
Stores user state data server-side while maintaining stateless nature.
Session State Management
+
Uses cookies, TempData, distributed caching, or session middleware.
Session State?
+
Server-side storage for user data.
session?
+
Server-side state management storing user data across requests.
Sessions maintained in MVC?
+
Sessions can be maintained using Session[] variables. Example: Session["User"] = "John";. ASP.NET uses server-side storage for session values. Cookies or session identifiers track user session state.
SignalR?
+
SignalR is a .NET library for real-time communication. It supports WebSockets and is used for chat apps, live dashboards, and notifications.
SignalR?
+
Real-time communication framework for push notifications, chat, live updates.
SignalR?
+
Framework for real-time communication like chat, live updates.
Significance of NonActionAttribute:
+
NonActionAttribute is used in MVC to prevent a public method inside a controller from being treated as an action method. It tells the framework not to expose or invoke the method via routing. This is useful for helper or private logic inside controllers.
Singleton lifetime?
+
Service instance created once for entire application lifetime.
Singleton lifetime?
+
Single instance for the entire application lifecycle.
Singleton lifetime?
+
One instance shared across application lifetime.
Soft Delete in API?
+
Use IsDeleted filter globally.
Soft Delete?
+
Mark record as deleted instead of physically removing.
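A sketch of the global-filter approach mentioned above, with a hypothetical Order entity:
public class Order
{
    public int Id { get; set; }
    public bool IsDeleted { get; set; }
}

// In the DbContext: a global query filter hides soft-deleted rows everywhere.
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    modelBuilder.Entity<Order>().HasQueryFilter(o => !o.IsDeleted);
}

// "Delete" becomes an update:
// order.IsDeleted = true; await db.SaveChangesAsync();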
SOLID?
+
Five design principles: SRP, OCP, LSP, ISP, DIP.
Spring MVC?
+
Spring MVC is a Java-based MVC framework used to build flexible and loosely coupled web applications.
SQL Injection?
+
Attack using unsafe SQL input.
SQL Injection?
+
Security attack via malicious SQL input.
SSO?
+
Single Sign-On allows login once across multiple apps.
SSO?
+
Single Sign-On allowing one login for multiple applications.
Startup class used for?
+
Configures services and the HTTP request pipeline.
Startup.cs?
+
Startup.cs in ASP.NET Core configures the application’s services and middleware pipeline. The ConfigureServices method registers services like dependency injection, database contexts, and authentication. The Configure method sets up middleware such as routing, error handling, and static files. It defines how the app responds to HTTP requests during startup.
Startup.cs?
+
File configuring middleware, routing, authentication in MVC Core.
Statelessness?
+
Server stores no client session; each request is independent.
Static Authorization?
+
Predefined access rules.
Static class?
+
Class that cannot be instantiated.
Steps in the execution of an MVC project?
+
Request goes to the Routing Engine, which maps it to a controller and action. The controller executes the required logic and interacts with the model. A View is selected and rendered to the browser. Finally, the response is returned to the client.
stored procedures?
+
Precompiled SQL code stored in the database.
Strong naming?
+
Assigning a unique identity using public/private key pairs.
strongly typed view?
+
A view bound to a specific model class for compile-time validation.
strongly typed view?
+
A view bound to a specific model class using @model keyword.
Strongly Typed Views
+
These views are bound to a model class using @model. They improve IntelliSense, compile-time safety, and easier data handling.
Swagger/OpenAPI?
+
Tool to document and test REST APIs.
Swagger?
+
Framework to document and test APIs interactively.
Swagger?
+
Documentation and testing tool for APIs.
Swagger?
+
Auto-documentation and testing tool for APIs.
Tag Helper in ASP.NET Core?
+
Tag helpers are server-side components that enable C# code to be used in HTML elements. They make views cleaner and more readable, especially for forms, routing, and validation. Examples include asp-controller, asp-route, and asp-validation-for.
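A small Razor sketch using the helpers named above; the controller, action, and model property are illustrative:
<a asp-controller="Home" asp-action="About">About</a>

<form asp-action="Save" method="post">
    <input asp-for="Name" />
    <span asp-validation-for="Name"></span>
</form>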
Tag Helper?
+
Server-side helpers to generate HTML in Razor views.
Tag Helper?
+
Server-side components used to generate dynamic HTML.
Tag Helpers?
+
Server-side Razor components that generate HTML in .NET Core MVC.
Task Parallel Library (TPL)?
+
Framework for parallel programming using tasks.
TempData in MVC?
+
TempData stores data temporarily and is used to pass values across requests, especially during redirects.
TempData used for?
+
Used to pass data across redirects between actions.
TempData?
+
Stores data for one request cycle.
TempData?
+
Stores data temporarily and persists across redirects.
the Base Class Library?
+
Reusable classes for IO, networking, collections, threading, XML, etc.
the DifBet early and late binding?
+
Early binding resolved at compile time, late binding at runtime.
the main components of .NET Framework?
+
CLR, Base Class Library, ASP.NET, ADO.NET, WPF, WCF.
Themes in ASP.NET application?
+
Themes style pages and controls consistently using CSS, skin files, and images stored in the App_Themes folder; they can be applied via the Page directive, Web.config, or programmatically to maintain a uniform UI design.
Themes in ASP.NET:
+
Themes define the UI look and feel of a web application. They include styles, skins, and images. Useful for consistent branding across pages.
Threading?
+
Executing multiple tasks concurrently.
Throttling?
+
Controlling request frequency.
Token Authentication?
+
Authentication based on tokens instead of cookies.
Token Binding?
+
Crypto mechanism tying tokens to client devices.
Token Exchange?
+
Exchanging one token for another for different scopes.
Token Introspection?
+
Process of validating token on the Authorization Server.
Token Revocation?
+
Process of invalidating tokens before expiration.
Token-Based Authorization?
+
Access granted via tokens like JWT.
tracing in .NET?
+
Tracing helps debug and analyze runtime behavior. It displays request details, control hierarchy, and performance info. Tracing can be enabled at page or application level. It is useful during development for troubleshooting.
Tracking vs NoTracking?
+
AsNoTracking improves performance for reads.
Transient lifetime?
+
New instance created each time the service is requested.
Transient lifetime?
+
Creates a new instance each time requested.
Transient lifetime?
+
Creates a new instance every time requested.
Two approaches of adding constraints to a route:
+
Constraints can be added using regular expressions or built-in constraint classes like HttpMethodConstraint.
Two ways to add constraints to a route?
+
1. Using Regular Expressions. 2. Using Parameter Constraints (like int, guid). They restrict valid route patterns. Helps avoid ambiguity.
Two ways to add constraints:
+
Using Regex constraints or custom constraint classes/interfaces.
Types of ActionResult?
+
ViewResult, JsonResult, RedirectResult, FileResult, PartialViewResult, ContentResult.
Types of authentication in ASP.NET?
+
Forms, Windows, Passport, Token, Basic.
Types of Caching?
+
In-memory, Distributed, Redis, Response caching.
Types of caching?
+
Output caching, Data caching, Distributed caching.
Types of caching?
+
In-Memory Cache, Distributed Cache, Response Cache.
Types of DI lifetimes?
+
Singleton, Scoped, Transient.
Types of filters?
+
Authorization, Action, Result, and Exception filters.
Types of Filters?
+
Authorization, Action, Result, Exception filters.
Types of JIT?
+
Pre-JIT, Econo-JIT, Normal-JIT.
Types of results in MVC?
+
Common types include ViewResult, JsonResult, RedirectResult, ContentResult, and FileResult. Each type corresponds to a different response format.
Types of Routing?
+
Attribute routing, Conventional routing, Minimal API routing.
Types of routing?
+
Convention-based routing and Attribute routing.
Types of Routing?
+
Convention-based and Attribute-based routing.
Types of serialization?
+
Binary, XML, SOAP, JSON.
Unboxing?
+
Extracting value type from object.
Unit of Work Pattern?
+
Manages multiple repositories under a single transaction.
Unit of Work Pattern?
+
Manages multiple repository operations under a single transaction.
Unit Testing Controllers
+
Controllers are tested using mock dependencies injected via constructor. Frameworks like Moq help simulate external services.
Unit Testing in MVC?
+
Testing controllers, models, and logic without running UI.
Unit Testing?
+
Testing individual code components.
Unmanaged Code?
+
Code executed directly by OS outside CLR like C/C++.
URI vs URL?
+
URI identifies a resource; URL locates it.
URL Rewriting Middleware
+
This middleware modifies request URLs before routing. It is useful for SEO redirects, legacy URL support, and HTTPS enforcement.
Use MVC in JSP?
+
Use Java Beans as the Model, JSP as the View, and Servlets as Controllers. The controller receives requests, interacts with the model, and forwards output to the view. Ensures clean separation of logic.
Use of ActionFilters in MVC?
+
Action filters execute custom logic before or after Action methods, such as logging, caching, or authorization.
Use of CheckBox in .NET?
+
A CheckBox allows users to select one or multiple options. It returns true/false based on user selection. It can trigger events like CheckedChanged. It is widely used in forms and permissions.
Use of default route {resource}.axd/{*pathinfo}?
+
It is used to ignore requests for Web Resource files. Static resources like scripts and images are handled separately. Prevents MVC routing from processing system files. Used mainly for performance optimization.
Use of ng-controller in external files?
+
ng-controller helps load logic defined in a separate JavaScript file. This separation keeps code modular and manageable. It also promotes reusability and avoids inline scripts. Used for scalable Angular applications.
Use of UseIISIntegration?
+
Configures the app to work with IIS as a reverse proxy.
Use of ViewModel:
+
A ViewModel holds data required by the view and may combine multiple models. It improves separation of concerns.
Use repeater control in ASP.NET?
+
Repeater displays repeated data from data sources like SQL or Lists. It provides full HTML control without predefined layout. Data is bound using DataBind() method. Ideal for flexible UI formatting.
used to handle an error in MVC?
+
MVC uses Exception Filters, HandleErrorAttribute, custom error pages, and global filters to handle errors. It also supports logging frameworks for exception tracking.
Using ASP.NET Core APIs from a Class Library
+
Class libraries can reference ASP.NET Core packages and use dependency injection to access services. Shared logic like validation or domain models can be placed in the library for reuse.
Validation in ASP.NET Core
+
Validation uses data annotations and model binding. It ensures rules are applied once and reused across views and APIs (DRY principle).
Validation in MVC?
+
Process ensuring user input meets defined rules before saving.
Various JSON files in ASP.NET Core?
+
appsettings.json, launchSettings.json, bundleconfig.json, and environment-specific config files.
Various steps to create the request object?
+
MVC parses the incoming HTTP request, identifies route data, and initializes the controller and action. Model binding then maps request values to parameters, and the request object is passed to the action.
View Component?
+
Reusable rendering component similar to partial views but with logic.
View Engine?
+
Component that renders UI from templates.
View in MVC?
+
View is the UI representation of model data shown to the user.
View Models
+
Custom class containing only data required by the View.
View State?
+
Preserves page and control values across postbacks in ASP.NET WebForms using a hidden field.
ViewBag?
+
Dynamic data dictionary for passing data from controller to view.
ViewData vs ViewBag?
+
ViewData: key-value dictionary for passing data to the view. ViewBag: dynamic wrapper around ViewData.
ViewData?
+
A dictionary-based container to pass data between controller and view.
ViewEngineResult?
+
Represents result of view engine locating view or partial.
ViewEngines?
+
Engines that compile and render views like RazorViewEngine.
ViewImports.cshtml?
+
Registers namespaces, helpers, and tag helpers for Razor views.
ViewModel?
+
A class combining multiple models or additional data required by the view.
ViewStart.cshtml?
+
Executes before every view and sets layout page.
ViewStart?
+
_ViewStart.cshtml runs before each view and sets common settings like layout. It helps avoid repeating configuration in each view.
ViewState?
+
A mechanism in ASP.NET WebForms to preserve page and control state across postbacks.
WCF bindings?
+
Transport protocols like basicHttpBinding, wsHttpBinding.
WCF?
+
Windows Communication Foundation for building service-oriented apps.
Web API in ASP.NET Core?
+
Framework for building RESTful services.
Web API in ASP.NET?
+
ASP.NET Web API is used to build RESTful services. It supports formats like JSON and XML. It enables communication between client and server applications. Web API is lightweight and ideal for mobile and SPA applications.
Web API vs MVC?
+
MVC returns views while Web API returns JSON/XML data.
Web API?
+
A framework for building RESTful HTTP services in .NET. It supports JSON, XML, routing, authentication, and stateless communication.
Web Farm?
+
Multiple servers hosting the same application.
Web Garden?
+
Multiple worker processes in same application pool.
Web Services in ASP.NET?
+
HTTP-based services exposed through .asmx files that use XML and SOAP for data exchange, enabling interoperable, cross-platform communication.
Web.config file in ASP?
+
Web.config is an XML configuration file for ASP.NET applications. It stores settings like database connections, security, and session management. It controls application-level behavior without recompiling code. Multiple Web.config files can exist for different directories.
Web.config?
+
XML configuration file for ASP.NET applications (.NET Framework), including MVC apps.
WebListener?
+
A Windows-only web server used when advanced Windows authentication features are required.
WebParts:
+
WebParts allow building customizable and personalized pages. Users can rearrange, edit, or hide parts of a page. Useful in dashboards and portal applications.
WebSocket?
+
Persistent, full-duplex communication protocol used for real-time applications.
Where is Startup.cs in ASP.NET Core 6.0?
+
In .NET 6+, minimal hosting model removes Startup.cs. Configuration like services, routing, and middleware is now placed directly in Program.cs.
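
A minimal Program.cs under the .NET 6+ hosting model (a sketch; the exact services and middleware depend on the app):

```csharp
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddControllersWithViews();   // what ConfigureServices used to do

var app = builder.Build();                    // what Configure used to do
app.UseStaticFiles();
app.MapDefaultControllerRoute();
app.Run();
```
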
Why are API keys less secure?
+
No expiration and easily leaked.
Why choose .NET for development?
+
.NET provides high performance, strong ecosystem, cross-platform support, built-in DI, cloud readiness, and great tooling like Visual Studio and GitHub Copilot. It's ideal for enterprise, web, mobile, and microservice applications.
Why do Access Tokens expire?
+
To reduce security risks and limit exposed lifetime.
Why not store authorization logic in UI?
+
Client-side can be tampered; authorization must be server-side.
Why use ASP.NET Core?
+
Fast, scalable, cloud-ready, open-source, modular design, and ideal for Microservices and container deployments.
Why validate authorization on every request?
+
To ensure permissions haven't changed.
Windows Authentication?
+
Uses Windows credentials for login.
Windows Authorization?
+
Authorization using Windows identity and AD groups.
Worker Services?
+
Worker Services run background jobs without UI. They are ideal for scheduled tasks, queue processing, and microservice background jobs.
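
A minimal BackgroundService sketch (the work inside the loop is a placeholder):

```csharp
using Microsoft.Extensions.Hosting;

public class QueueWorker : BackgroundService
{
    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            // Placeholder work item: poll a queue, process a batch, etc.
            Console.WriteLine($"Processing batch at {DateTimeOffset.Now}");
            await Task.Delay(TimeSpan.FromSeconds(10), stoppingToken);
        }
    }
}

// Registered in Program.cs with:
// builder.Services.AddHostedService<QueueWorker>();
```
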
WPF MVVM Pattern?
+
Model-View-ViewModel for UI separation.
WPF?
+
Windows Presentation Foundation for building rich desktop UIs.
wwwroot folder in ASP.NET Core?
+
Public web root for static files (CSS, JS, images); files outside are not directly accessible.
XACML?
+
Authorization standard using XML-based policies.
XAML?
+
Markup language used to define UI elements in WPF.
XSS Prevention
+
XSS occurs when user input is executed as script. ASP.NET Core prevents this through automatic HTML encoding and validation.
XSS?
+
Cross-site scripting via malicious scripts.
Zero Trust?
+
Always verify identity regardless of network.

Draw.io / Lucidchart

+
Benefit of using cloud-based diagram tools?
+
No installation required, supports remote collaboration, version history, and easy sharing.
Can Draw.io integrate with Jira or Confluence?
+
Yes, via plugins, Draw.io diagrams can be embedded in Jira issues and Confluence pages for collaborative documentation.
Difference between Draw.io and Lucidchart?
+
Draw.io is free and open-source; Lucidchart is paid with advanced collaboration, templates, and integration features.
Draw.io?
+
Draw.io is a free web-based diagramming tool for flowcharts, org charts, network, and architecture diagrams.
Lucidchart?
+
Lucidchart is a cloud-based diagramming tool similar to Visio, with collaboration, real-time editing, and integration with apps like Google Workspace.
Shape formatting in Draw.io or Lucidchart?
+
Shapes can be customized with colors, borders, shadows, and labels to improve clarity and visual hierarchy.
How do you collaborate in Lucidchart?
+
Real-time editing, commenting, and version control allow multiple users to work together on diagrams.
How do you export diagrams in Draw.io?
+
Diagrams can be exported as PNG, JPG, PDF, SVG, or VSDX for offline use.
Can you link diagrams to live data?
+
Some tools allow linking shapes to data sources like Google Sheets, Excel, or databases to reflect dynamic information.
How do you maintain version history in Lucidchart?
+
Lucidchart automatically tracks changes; you can restore or view previous versions via the revision history panel.

Engineer to Architect

+
Engineer → Architect: Key Topics to Master
+
  1. Core Engineering Excellence
    • Data structures & algorithms
    • Clean code, design principles (SOLID, DRY, KISS)
    • Debugging & performance tuning
  2. System Design
    • High-level architecture patterns
    • Scalability, availability, reliability
    • Load balancing, caching, sharding
    • CAP theorem & distributed systems
  3. Architecture Patterns
    • Monolith vs Microservices
    • Event-driven architecture
    • Layered, Hexagonal, Clean Architecture
    • SOA, CQRS, Saga
  4. Cloud & Infrastructure
    • AWS / Azure / GCP fundamentals
    • Containers (Docker) & orchestration (Kubernetes)
    • CI/CD pipelines
    • IaC (Terraform, ARM, CloudFormation)
  5. Security & Compliance
    • Authentication & Authorization
    • OAuth, SSO, JWT
    • OWASP Top 10
    • Data protection & compliance (GDPR, SOC2, ISO)
  6. Data & Integration
    • SQL vs NoSQL
    • Data modeling
    • Message brokers (Kafka, RabbitMQ)
    • API design (REST, GraphQL)
  7. Non-Functional Requirements
    • Performance
    • Scalability
    • Maintainability
    • Observability (logging, monitoring, tracing)
  8. Business & Domain Understanding
    • Translating business needs into technical solutions
    • Cost optimization
    • ROI-driven design
  9. Leadership & Communication
    • Technical documentation
    • Architecture diagrams
    • Stakeholder communication
    • Mentoring engineers
  10. Decision Making
    • Trade-off analysis
    • Build vs Buy
    • Technology evaluation
    • Risk assessment
Must-Know System Design Topics to Crack Your Next Interview
+

System design interviews can be daunting, but with the right preparation, you can confidently tackle even the most challenging questions. This guide focuses on the most critical system design topics to help you build scalable, resilient, and efficient systems. Whether you're designing for millions of users or preparing for your dream job, mastering these areas will give you the edge you need.

1. APIs (Application Programming Interfaces)

APIs are the backbone of communication between systems and applications, enabling seamless integration and data sharing. Designing robust APIs is critical for building scalable and maintainable systems.

Key Topics to Focus On:

  • REST vs GraphQL: Understand when to use REST (simplicity, caching) versus GraphQL (flexibility, reduced over-fetching).
  • API Versioning: Learn strategies for maintaining backward compatibility while rolling out new features.
  • Authentication & Authorization: Implement secure practices using OAuth2, API keys, and JWT tokens.
  • Rate Limiting: Prevent abuse by controlling the number of API calls using strategies like token bucket or quota systems (see the sketch after this list).
  • Pagination: Handle large datasets efficiently with offset, cursor-based, or keyset pagination.
  • Idempotency: Design APIs to safely handle retries without unintended side effects.
  • Monitoring and Logging: Implement tools for tracking API performance, errors, and usage.
  • API Gateways: Explore tools like Kong, Apigee, or AWS API Gateway to manage APIs at scale, including traffic routing, throttling, and caching.
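
For instance, here is a minimal single-node token-bucket sketch in C# (real deployments usually keep this state in Redis or enforce limits at the gateway; all names are illustrative):

```csharp
public class TokenBucket
{
    private readonly int _capacity;          // burst size
    private readonly double _refillPerSec;   // sustained rate
    private double _tokens;
    private DateTime _lastRefill = DateTime.UtcNow;
    private readonly object _lock = new();

    public TokenBucket(int capacity, double refillPerSec)
    {
        _capacity = capacity;
        _refillPerSec = refillPerSec;
        _tokens = capacity;
    }

    public bool TryConsume()
    {
        lock (_lock)
        {
            // Refill in proportion to elapsed time, capped at capacity.
            var now = DateTime.UtcNow;
            _tokens = Math.Min(_capacity,
                _tokens + (now - _lastRefill).TotalSeconds * _refillPerSec);
            _lastRefill = now;

            if (_tokens < 1) return false;   // caller should respond with HTTP 429
            _tokens -= 1;
            return true;
        }
    }
}
```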

2. Load Balancer

A load balancer ensures high availability and scalability in distributed systems by distributing traffic across multiple servers. Mastering load balancers will help you design resilient systems.

Key Topics to Focus On:

  • Types of Load Balancers: Understand Application Layer (L7) and Network Layer (L4) load balancers and their specific use cases. Application load balancers are suited for HTTP traffic and can route based on content, while network load balancers are faster and operate at the connection level.
  • Algorithms: Familiarize yourself with common algorithms like Round Robin (evenly distributes requests), Least Connections (sends requests to the server with the fewest active connections), and IP Hashing (routes requests based on client IP); a Round Robin sketch follows this list.
  • Health Checks: Learn how to monitor server availability using ping, HTTP checks, or custom scripts, and reroute traffic from unhealthy servers to healthy ones.
  • Sticky Sessions: Explore how to maintain user session consistency by tying sessions to specific servers, using cookies or server configurations.
  • Scaling Strategies: Differentiate between horizontal scaling (adding more servers to the pool) and vertical scaling (adding more resources to an existing server). Explore auto-scaling techniques and thresholds.
  • Global Load Balancers: Manage traffic across multiple regions with DNS-based routing, latency-based routing, and failover mechanisms.
  • Reverse Proxy: Understand its gateway functionality, including caching, SSL termination, and security benefits such as hiding internal server details.
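
As a concrete reference for the Round Robin algorithm mentioned above, a minimal thread-safe picker (the server addresses are placeholders):

```csharp
public class RoundRobinBalancer
{
    private readonly string[] _servers = { "10.0.0.1", "10.0.0.2", "10.0.0.3" };
    private int _counter = -1;

    public string NextServer()
    {
        // Atomic increment keeps the rotation correct under concurrent requests;
        // masking with int.MaxValue keeps the index non-negative after overflow.
        var i = Interlocked.Increment(ref _counter) & int.MaxValue;
        return _servers[i % _servers.Length];
    }
}
```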

3. Database (SQL vs NoSQL)

Database design and optimization are crucial in system design. Knowing how to choose and scale databases is vital.

Key Topics to Focus On:

  • SQL vs NoSQL: Understand differences in schema design, query languages, and scalability. SQL databases (MySQL, PostgreSQL) offer strong ACID compliance, while NoSQL databases (MongoDB, Cassandra) provide flexibility and are better for unstructured data.
  • Sharding & Partitioning: Learn techniques for distributing data, such as range-based, hash-based, and directory-based partitioning, and how to implement them.
  • Replication: Study setups like Primary-Secondary (read replicas) and Multi-Master (for high write availability) replication and their trade-offs.
  • Consistency Models: Dive into Strong Consistency (all nodes agree on data updates immediately) vs Eventual Consistency (updates propagate over time). Understand CAP theorem’s implications.
  • Indexing: Optimize database queries with proper indexing strategies (single-column, composite, or full-text indexing) to speed up lookups.
  • Caching: Accelerate read operations with external caching layers (Redis or Memcached) and explore read-through and write-back caching strategies (see the cache-aside sketch after this list).
  • Backup & Recovery: Plan failover mechanisms with hot backups, cold backups, and snapshot-based recovery to ensure data availability.
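
To make the caching strategy concrete, a hedged cache-aside sketch using StackExchange.Redis and System.Text.Json; the IUserStore interface, key format, and TTL are assumptions for illustration:

```csharp
using System.Text.Json;
using StackExchange.Redis;

public record User(long Id, string Name);
public interface IUserStore { Task<User?> LoadFromDbAsync(long id); }

public class UserCache
{
    private readonly IDatabase _redis;
    private readonly IUserStore _store;

    public UserCache(IConnectionMultiplexer mux, IUserStore store)
    {
        _redis = mux.GetDatabase();
        _store = store;
    }

    public async Task<User?> GetAsync(long id)
    {
        var key = $"user:{id}";
        var cached = await _redis.StringGetAsync(key);
        if (cached.HasValue)
            return JsonSerializer.Deserialize<User>(cached.ToString()); // cache hit

        var user = await _store.LoadFromDbAsync(id);                    // miss: go to DB
        if (user is not null)
            await _redis.StringSetAsync(key, JsonSerializer.Serialize(user),
                TimeSpan.FromMinutes(10));                              // TTL bounds staleness
        return user;
    }
}
```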

4. Application Server

The application server is the backbone of modern distributed systems. Its ability to handle client requests and business logic is critical to system performance and reliability.

Key Topics to Focus On:

  • Stateless vs Stateful Architecture: Learn trade-offs between stateless systems (easier scaling, no session dependency) and stateful systems (session persistence but complex scaling).
  • Caching Mechanisms: Compare in-memory solutions like Redis (supports data structures and persistence) and Memcached (simple key-value store) against local caching for reducing database load.
  • Session Management: Analyze the pros and cons of cookies (state stored on the client) versus JWT tokens (self-contained, scalable, and stateless session management).
  • Concurrency: Understand threading models, thread pools, and async handling (using async/await or event-driven frameworks) to handle high concurrent requests.
  • Microservices Architecture: Delve into service discovery mechanisms like Consul and Eureka, inter-service communication patterns (REST, gRPC, or message brokers), and resiliency patterns like circuit breakers.
  • Containerisation: Explore Docker for lightweight application containers and Kubernetes for orchestrating deployments, scaling, and updates in microservices.
  • Rate Limiting: Implement strategies such as token bucket or leaky bucket algorithms to manage traffic, prevent abuse, and ensure fair usage.

5. Pub-Sub or Producer-Consumer Patterns

Messaging systems enable communication in distributed environments. Understanding these patterns is essential for designing event-driven architectures.

Key Topics to Focus On:

  • Messaging Patterns: Differentiate between Pub-Sub (one-to-many communication) and Queue-based (one-to-one communication) systems for real-time vs batch processing.
  • Message Brokers: Compare Kafka (distributed, durable, and scalable), RabbitMQ (lightweight and supports complex routing), and AWS SQS/SNS (managed solutions).
  • Idempotency: Ensure reliable processing by avoiding duplicate operations using unique identifiers or deduplication logic (see the sketch after this list).
  • Durability & Ordering: Learn about persistent storage of messages for durability and how brokers like Kafka maintain message order.
  • Dead Letter Queues: Use DLQs to store messages that fail after maximum retries for debugging and reprocessing.
  • Scaling: Implement consumer groups in Kafka or parallel consumers in RabbitMQ for processing high-throughput messages.
  • Eventual Consistency: Design patterns for asynchronous updates while maintaining consistency across distributed systems.
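
A minimal sketch of the idempotency idea from the list above, deduplicating by message ID (a real system would keep the seen-set in Redis or a database, not in memory):

```csharp
public record Message(string Id, string Payload);

public class IdempotentConsumer
{
    private readonly HashSet<string> _processed = new();

    public void Handle(Message msg, Action<Message> process)
    {
        // HashSet.Add returns false for a duplicate, so redelivered
        // messages are skipped and retries become safe.
        if (!_processed.Add(msg.Id)) return;
        process(msg);
    }
}
```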

6. Content Delivery Network (CDN)

CDNs optimize content delivery by reducing latency and improving load times for users across the globe.

Key Topics to Focus On:

  • Basics of CDNs: Understand how edge caching reduces latency and enhances user experience by delivering content from servers closer to the user.
  • Caching Policies: Study TTL (Time-To-Live) settings for cached objects and how to handle content invalidation for updates.
  • Geolocation Routing: Deliver content from the nearest data centre for speed and efficiency using geolocation-based routing.
  • Static vs Dynamic Content: Optimise delivery for static content (images, videos, scripts) using caching and learn techniques to accelerate dynamic content delivery.
  • SSL/TLS: Ensure secure communication by offloading SSL termination to CDNs and supporting modern protocols like HTTP/2.
  • Load Handling: Handle traffic spikes gracefully with CDN’s elastic scaling capabilities.
  • DDoS Protection: Protect your system from volumetric attacks with CDN’s built-in security features like rate limiting, bot filtering, and WAF (Web Application Firewall).

Conclusion

System design is not just about building software; it’s about crafting experiences that are scalable, reliable, and delightful for users. The topics outlined here are prioritized to help you focus on the most impactful areas first. Dive deep into these concepts, practice applying them to real-world scenarios, and you’ll be well-equipped to ace your interviews and design systems that stand the test of time.

Instagram System Design: The Blueprint to Crack FAANG Interviews
+

🚀 Intro: Why Instagram’s system design is worth studying

Instagram isn’t just a photo-sharing app. It’s a hyper-scale social network, serving:

  • Over 2 billion users monthly,
  • Hundreds of millions of posts daily,
  • Billions of feed views, likes, comments, and stories each day.

Yet it remains lightning fast and almost always available, even under massive load.

Studying Instagram’s architecture gives you practical lessons on:

  • How to architect for extreme read/write scalability (through fan-out, caching, sharding).
  • How to balance consistency vs performance for feeds & notifications.
  • How to use asynchronous pipelines to keep user experience smooth, offloading heavy tasks like video processing.
  • How CDNs and edge caching slash latency and costs.

It’s a masterclass in building resilient, high-throughput, low-latency distributed systems.

📌 1. Requirements & Estimations

Functional Requirements

  • Users should be able to sign up, log in, and maintain profiles.
  • Users can upload photos & videos with captions.
  • Users can follow/unfollow other users.
  • Users should see a personalized feed of posts from accounts they follow, ranked by relevance.
  • Users can like, comment, and share posts.
  • Users can view ephemeral stories, disappearing after 24 hours.
  • Notifications for likes/comments/follows.

🚀 Non-Functional Requirements

  • High availability: Instagram can’t afford downtime; target 99.99%.
  • Low latency: Feed loads in under 200ms globally.
  • Scalability: System should handle hundreds of millions of DAUs generating billions of reads and writes daily.
  • Eventual consistency: It’s acceptable for a slight delay in seeing new posts or likes.
  • Durability: No data loss on photos/videos.

📊 Estimations & Capacity Planning

Let’s break this down using realistic assumptions to size our system.

📅 Daily Active Users (DAUs)

  • Assume 500 million DAUs.

📷 Posts

  • Average 1 photo/video post per user per day.
  • ≈ 500M posts/day.

📰 Feed Reads

  • Assume each user opens the app 10 times/day.
  • Each time loads the feed.

≈ 5 billion feed reads/day.

💬 Likes & Comments

  • Each user likes 20 posts/day and comments 2 times/day.

≈ 10 billion likes/day and ≈ 1 billion comments/day.

💾 Storage

  • Average photo = 500 KB, video = 5 MB (average across formats).
  • If 70% are photos, 30% are short videos, blended avg ≈ 1.5 MB/post.

500M posts/day × 1.5 MB ≈ 750 TB/day

  • Retained indefinitely = petabytes scale storage.

🔥 Throughput

  • Write-heavy ops:
    • 500M posts/day ≈ 6,000 writes/sec.
    • 10B likes/day ≈ 115,000 writes/sec.
  • Read-heavy ops:
    • 5B feed reads/day ≈ 58,000 reads/sec.

Peak hour traffic typically 3x average, so we design for:

  • ~20,000 writes/sec for posts
  • ~350,000 writes/sec for likes/comments
  • ~175,000 feed reads/sec.

🔍 Derived requirements

Resource | Estimated Load
Posts DB | 6K writes/sec, PB-scale storage
Feed service | 175K reads/sec
Likes/comments DB | 350K writes/sec, heavy fan-outs
Media store | ~750 TB/day ingest, geo-cached
Notifications | ~100K events/sec on Kafka

🚀 2. API Design

Instagram is essentially a social network with heavy content feed, so most APIs revolve around:

  • User management
  • Posting content
  • Fetching feeds
  • Likes & comments
  • Stories
  • Notifications

Below, we’ll design REST-like APIs, though in production Instagram also uses GraphQL for flexible client-driven queries.

🔐 Authentication APIs

POST /signup

Register a new user.

{ "username": "rocky.b", "email": "rocky@example.com", "password": "securepassword" }

Returns:

{ "user_id": "12345", "token": "JWT_TOKEN" }

POST /login

Authenticate user, return JWT session.

{ "username": "rocky.b", "password": "securepassword" }

Returns:

{ "token": "JWT_TOKEN", "expires_in": 3600 }

👤 User profile APIs

GET /users/{username}

Fetch public profile info.
Returns:

{ "user_id": "12345", "username": "rocky.b", "bio": "Tech + Systems.", "followers_count": 450, "following_count": 200, "profile_pic_url": "https://cdn.instagram.com/..." }

POST /users/{username}/follow

Follow or unfollow user.

{ "action": "follow" // or "unfollow" }

Returns: HTTP 200 or error.

📷 Post APIs

POST /posts

Create a new photo/video post.
(Multipart upload — image/video, plus JSON metadata)

{ "caption": "Building systems is fun", "tags": ["systemdesign", "ai"] }

Returns:

{ "post_id": "67890" }

GET /posts/{post_id}

Fetch a single post.

{ "post_id": "67890", "user": {...}, "media_url": "...", "caption": "...", "likes_count": 1530, "comments_count": 55, "created_at": "2025-07-03T12:00:00Z" }

POST /posts/{post_id}/like

Like/unlike a post.

{ "action": "like" }

Returns: HTTP 200.

GET /posts/{post_id}/comments

Fetch comments on a post.
Returns:

[ { "user": {...}, "text": "Awesome!", "created_at": "2025-07-03T12:30:00Z" }, ... ]

📰 Feed APIs

GET /feed

Personalized feed for current user.

  • Could support ?limit=20&after_cursor=... for pagination.

Returns:

[ { "post_id": "67890", "user": {...}, "media_url": "...", "caption": "...", "likes_count": 1530, "comments_count": 55, "created_at": "2025-07-03T12:00:00Z" }, ... ]

🕒 Stories APIs

POST /stories

Upload a story (ephemeral).

{ "media_url": "...", "expires_in": 86400 }

GET /stories

Get stories from people the user follows.

🔔 Notification APIs

GET /notifications

List user notifications (likes, comments, follows).
Returns:

[ { "type": "like", "by_user": {...}, "post_id": "67890", "created_at": "2025-07-03T13:00:00Z" }, ... ]

⚖️ Design considerations

  • Use JWT or OAuth tokens for auth.
  • Rate limit per IP/user on all write endpoints to prevent spam (e.g. max 10 likes/sec).
  • GraphQL alternative:
    Instagram uses GraphQL heavily for clients to fetch exactly what fields they need in feed or profile views — reduces over-fetching and allows mobile flexibility.

🗄️ 3. Database Schema & Indexing

⚙️ Core strategy

Instagram is read-heavy, but also requires huge write throughput (posting, likes, comments) and needs efficient fan-out for feeds.

  • Primary data store: Sharded Relational DB (like MySQL) for user, post, comment data.
  • Secondary data store: Wide-column store (like Cassandra) for timelines & feeds (optimized for fast reads).
  • Specialized indexes: ElasticSearch for search, plus Redis for hot caching.

📜 Key Tables & Schemas

👤 users table

Column | Type | Notes
user_id | BIGINT PK | Sharded by consistent hash
username | VARCHAR | UNIQUE, indexed
email | VARCHAR | UNIQUE, indexed
password_hash | VARCHAR | Stored securely
bio | TEXT |
profile_pic | VARCHAR | URL to blob store
created_at | DATETIME |

Indexes:

  • UNIQUE INDEX username_idx (username)
  • UNIQUE INDEX email_idx (email)

📷 posts table

Column | Type | Notes
post_id | BIGINT PK |
user_id | BIGINT | Indexed, for author lookups
caption | TEXT |
media_url | VARCHAR | Points to blob storage
media_type | ENUM(photo, video) |
created_at | DATETIME |

Indexes:

  • INDEX user_posts_idx (user_id, created_at DESC) for user profile pages.

💬 comments table

Column | Type | Notes
comment_id | BIGINT PK |
post_id | BIGINT | Indexed
user_id | BIGINT | Commenter
text | TEXT |
created_at | DATETIME |

Indexes:

  • INDEX post_comments_idx (post_id, created_at ASC)

❤️ likes table

Column | Type | Notes
post_id | BIGINT |
user_id | BIGINT | Who liked
created_at | DATETIME |

PK: (post_id, user_id) (so no duplicate likes)
Secondary:

  • INDEX user_likes_idx (user_id)

👥 followers table

Column | Type | Notes
user_id | BIGINT | The user being followed
follower_id | BIGINT | Who follows them
created_at | DATETIME |

PK: (user_id, follower_id)
Secondary:

  • INDEX follower_idx (follower_id)

This helps:

  • Find who a user follows (WHERE follower_id = X)
  • Or who follows a user (WHERE user_id = Y)

📰 feed_timeline table (Wide-column DB like Cassandra)

This is precomputed for fast feed reads.

Partition Key | Clustering Columns | Values
user_id | created_at DESC | post_id

This design:

  • Partition by user_id to keep all a user’s feed together.
  • Cluster by created_at DESC to allow efficient paging.

Fetching a feed then becomes:

SELECT post_id FROM feed_timeline WHERE user_id = 12345 ORDER BY created_at DESC LIMIT 20;

🔔 notifications table

Column | Type | Notes
notif_id | BIGINT PK |
user_id | BIGINT | Who receives this notif
type | ENUM(like, comment, follow) |
by_user_id | BIGINT | Who triggered the notif
post_id | BIGINT NULL | For post context
created_at | DATETIME |

Index:

  • INDEX user_notif_idx (user_id, created_at DESC)

📂 Special indexing considerations

 Sharding:

  • Users, posts, comments tables are sharded by user_id using consistent hashing (sketched below).
  • Ensures balanced distribution & avoids hot spots.
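
To illustrate how consistent hashing maps users to shards, a small hash-ring sketch (the virtual-node count and MD5 are arbitrary choices for the example, not Instagram's actual implementation):

```csharp
using System.Linq;
using System.Security.Cryptography;
using System.Text;

public class ConsistentHashRing
{
    private readonly SortedDictionary<uint, string> _ring = new();

    public void AddShard(string shard, int virtualNodes = 100)
    {
        // Virtual nodes smooth the key distribution across shards.
        for (var v = 0; v < virtualNodes; v++)
            _ring[Hash($"{shard}#{v}")] = shard;
    }

    public string GetShard(string key)
    {
        var h = Hash(key);
        // The first ring point clockwise from the key's hash owns the key.
        foreach (var (point, shard) in _ring)
            if (point >= h) return shard;
        return _ring.First().Value; // wrap around the ring
    }

    private static uint Hash(string s) =>
        BitConverter.ToUInt32(MD5.HashData(Encoding.UTF8.GetBytes(s)), 0);
}

// Usage: ring.AddShard("db-shard-1"); var shard = ring.GetShard($"user:{userId}");
```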

 Follower relationships:

  • Indexed both by user_id and follower_id to support both “who do I follow” and “who follows me” efficiently.

 Feed timelines:

  • Stored in Cassandra for high-volume writes and fast sequential reads.

 ElasticSearch:

  • Separate index on username, hashtags, captions for full-text & partial matching.

 Hot caches:

  • Redis stores pre-rendered user profiles & top feed pages for milliseconds-level reads.

🏗️ 4. High-Level Architecture (Explained)

🔗 1. DNS & Client

  • When you open the Instagram app or website, it resolves the DNS to find the closest Instagram server cluster.
  • It uses Geo DNS to route your request to the nearest data center, improving latency.

⚖️ 2. Load Balancer

  • The load balancer receives incoming HTTP(S) requests from clients.
  • Distributes them to multiple API Gateways, ensuring:
    • No single server is overwhelmed.
    • Requests are routed efficiently to regions with capacity.

🚪 3. API Gateway

  • Instagram typically runs multiple API Gateways, separating concerns:
    • API Gateway 1: optimized for read-heavy traffic (feeds, comments, likes counts, profile views).
    • API Gateway 2: optimized for write-heavy traffic (posting, likes, comments inserts).
  • API Gateways handle:
    • Authentication (JWT tokens or OAuth).
    • Basic rate limiting.
    • Request validation & routing.

🚀 4. App Servers

App Server (Read)

  • Handles:
    • Fetching user feeds (list of posts).
    • Getting comments on a post.
    • Loading user profiles.
  • Talks to:
    • Metadata DB to fetch structured data.
    • Cache layer for ultra-low-latency fetches.
    • Search systems for queries.

App Server (Write)

  • Handles:
    • New posts, likes, comments, follows.
  • Publishes tasks to:
    • Feed Generation Queue (to fan out posts to followers).
    • Video Processing Queue (for transcoding media).

📝 5. Cache Layer

  • Uses Redis or Memcached clusters to speed up reads.
  • Examples:
    • feed:user:1234 → cached list of post IDs for the feed.
    • profile:rocky.b → cached profile metadata.
  • Also used for search hot results caching.

🗄️ 6. Metadata Databases

  • Typically sharded MySQL or PostgreSQL clusters.
  • Directory Based Partitioning: users are partitioned by a consistent hash of user_id to evenly distribute load.
  • Stores:
    • Users, posts, comments, followers data.
  • Managed by a Shard Manager service that maps user_id -> DB shard.

🔍 7. Search Index & Aggregators

  • Uses ElasticSearch for:
    • Username lookups.
    • Hashtag queries.
    • Trending discovery.
  • Separate search aggregators fetch results from multiple shards and combine.

📺 8. Media (Blob Storage & Processing)

  • Photos & videos are uploaded to Blob Storage (like S3, Google Cloud Storage, or Instagram’s own blob infra).
  • Processed by Video/Image Processing Service:
    • Generates multiple resolutions.
    • Extracts thumbnails.
    • Watermarking or tagging (if required).
  • Processing is done asynchronously by a pool of workers, consuming from the Video Processing Queue.

📰 9. Feed Generation Service

  • New posts are published to the Feed Generation Queue.
  • Feed workers pick these up, update follower timelines in the database or cache.
  • Ensures that when followers open their feed, new posts are already visible.

🔔 10. Notification Service

  • Likes, comments, follows generate events to the Notification Queue.
  • Notification workers consume these, write to a notifications table.
  • Also sends real-time push notifications via APNs / FCM.

🌍 11. CDN

  • All static assets (images, videos, CSS/JS for web) are served via a Content Delivery Network (CDN).
  • Ensures global users fetch media from the nearest edge server.

🔁 12. Retry & Resilience Loops

  • Most queues have built-in retry for failed tasks.
  • Periodic health checks, circuit breakers on downstream services to maintain reliability.

📰 5. Detailed Feed Generation Pipeline & Fan-out vs Fan-in

🚀 Why is this hard?

Instagram’s feed is arguably the most demanding feature in their architecture:

  • It must support billions of reads/day, each personalized.
  • Also support hundreds of millions of new posts/day that must appear in followers’ feeds almost instantly.

Doing this with strong consistency would overwhelm the system. So Instagram engineers carefully balance consistency, freshness, latency, and cost.

⚙️ Fan-out vs Fan-in

🔄 Fan-out on write

What:

  • When a user posts, the system immediately pushes a reference of that post into all followers’ feed timelines (like inserting into feed_timeline wide-column table).

Pros:
Extremely fast feed reads: each user's timeline is prebuilt.
No need to join multiple tables at read time.

Cons:
Massive write amplification. A post by a celebrity with 100M followers = 100M writes.
Slower writes.
Risk of burst load on feed DB.

🔍 Fan-in on read

What:

  • When a user opens their feed, the app dynamically queries all people they follow and aggregates their posts.

Pros:
Simple writes: just insert one post record.
No write amplification.

Cons:
Slow feed reads (lots of joins across many partitions).
Hard to rank or apply ML scoring across distributed data.

🚀 Hybrid approach (what Instagram uses)

  • Fan-out on write for typical users.
    • When you post, it writes references into ~500-1000 followers’ feed timelines.
    • Ensures reads are lightning fast.
  • Fan-in on read for celebrities & large accounts.
    • For example, a post from an account with 100M followers isn’t fanned out.
    • Instead, when a user opens their feed, the system dynamically pulls these “hot posts” and merges.

This balances the write load and avoids explosion of writes for massive accounts.
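
A simplified sketch of that hybrid decision; the interfaces and the follower threshold are invented for illustration (Instagram's real cut-offs and storage APIs are not public):

```csharp
public interface IFollowerStore { Task<IReadOnlyList<long>> GetFollowerIdsAsync(long userId); }
public interface IFeedTimeline  { Task AppendAsync(long followerId, long postId, DateTime at); }
public interface IHotPostIndex  { Task MarkAsync(long postId); }

public class FeedFanoutService
{
    private const int CelebrityThreshold = 10_000; // arbitrary example value
    private readonly IFollowerStore _followers;
    private readonly IFeedTimeline _timeline;
    private readonly IHotPostIndex _hotPosts;

    public FeedFanoutService(IFollowerStore f, IFeedTimeline t, IHotPostIndex h)
        => (_followers, _timeline, _hotPosts) = (f, t, h);

    public async Task OnPostCreatedAsync(long authorId, long postId)
    {
        var followers = await _followers.GetFollowerIdsAsync(authorId);

        if (followers.Count > CelebrityThreshold)
        {
            await _hotPosts.MarkAsync(postId);   // fan-in on read for hot accounts
            return;
        }

        foreach (var followerId in followers)    // fan-out on write for normal accounts
            await _timeline.AppendAsync(followerId, postId, DateTime.UtcNow);
    }
}
```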

🏗️ Feed Generation Pipeline (Step-by-Step)

1️⃣ Post is created

  • User makes a new post → hits Write App Server → inserts into posts table.
  • Simultaneously, a Kafka event is published:

{ user_id, post_id, created_at }

2️⃣ Feed Generation Queue

  • This Kafka message is picked by Feed Generation Service.
  • Looks up the followers list from followers table (can be sharded, cached).

3️⃣ Writes to Feed Timeline

  • For normal users:
    • Feed service writes small records to feed_timeline table for each follower:

user_id: Follower1 -> post_id, created_at
user_id: Follower2 -> post_id, created_at
...

  • This populates the feed ahead of time.
  • For large accounts:
    • Simply marks the post as “hot,” skips massive fan-out.

4️⃣ Caching & Ranking

  • Each user’s feed (say top 100 posts) is cached in Redis:

feed:user:12345 -> [post_id1, post_id2, ...]

  • Cache may include precomputed ML scores or sort order.
  • When a user opens the app, it pulls from this cache, reducing DB hits.

5️⃣ Feed API response

  • GET /feed fetches post IDs from cache.
  • App Server then batches lookups to posts table to retrieve media & captions.
  • Also merges with hot celebrity posts pulled via on-demand fan-in.

🧠 Re-ranking with ML

  • Instagram doesn’t just show chronological.
  • They use a lightweight ML model at request time to adjust order:
    • Your past interactions
    • Freshness
    • Content type preferences

This final sort happens in-memory before the feed is returned.

⚖️ Trade-offs & safeguards

Strategy | Pros | Cons
Fan-out | Fast reads | Heavy writes
Fan-in | Light writes | Slow reads for many follows
Hybrid | Balanced | More infra complexity

  • To prevent cache stampedes, they use randomized TTLs on Redis keys (see the snippet below).
  • For celebrity posts, they often appear slightly delayed vs normal posts, to maintain system stability.
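
A tiny sketch of the randomized-TTL trick (the jitter window is an arbitrary example):

```csharp
// Spreading expirations over a window prevents many keys from
// expiring at once and stampeding the database behind the cache.
static TimeSpan JitteredTtl(TimeSpan baseTtl) =>
    baseTtl + TimeSpan.FromSeconds(Random.Shared.Next(0, 300)); // up to 5 extra minutes

// e.g. cache.StringSet(key, value, JitteredTtl(TimeSpan.FromHours(1)));
```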

🎥 6. Media Handling & CDN Strategy

🌐 Why this matters

Instagram’s value is visual content. Images & videos drive engagement, but they also create huge challenges:

  • Massive volume: Hundreds of millions of photos/videos uploaded daily.
  • Latency: Users expect instant uploads & quick playback.
  • Bandwidth & device constraints: Must work on 2G in India as well as 5G in the US.
  • Cost: Optimizing storage & delivery saves millions.

So Instagram uses a carefully architected asynchronous pipeline with multi-tiered storage & CDN caching.

🚀 Image/Video Upload Pipeline

1️⃣ Upload initiation

  • When you select an image/video and hit post:
    • The client generates thumbnails locally (for immediate UI feedback).
    • Makes a POST /posts API call with caption, tags, etc.

2️⃣ Direct upload to blob store

  • Instead of routing large files through app servers (which would choke them), Instagram gives the client a pre-signed URL (e.g. from S3 or internal blob system).
  • Client uploads directly to blob store.

This bypasses API server bandwidth constraints.

3️⃣ Metadata record creation

  • Once the upload is complete, the client notifies Instagram (via API).
  • App server then creates a record in the posts table:

post_id | user_id | caption | media_url | created_at

  • Media is initially marked as processing.

🏗️ 4️⃣ Asynchronous transcoding

  • Kafka event (or similar queue) is published:

{ post_id, media_url, media_type }
  • Video/Image Processing Service picks up the task:
    • Generates multiple resolutions & bitrates:
      • 1080p, 720p, 480p for video
      • Low/medium/high for images
    • Extracts key frames, creates preview thumbnails.
    • Runs compression pipelines to reduce size.
  • Final files are stored back in blob storage.

5️⃣ Media URL replacement

  • Once transcoding is complete, the service updates the posts DB row to:
    • Set status = ready.
    • Insert links to processed files.
  • Feed service & client now serve these optimized URLs.

🗄️ Blob Storage & Lifecycle

Storage architecture

  • Uses hot + cold blob storage tiers to balance speed & cost.

Tier | Use | Example
Hot | Recent uploads, frequent access | SSD-backed S3 / internal hot tier
Cold | Older content, less accessed | Glacier / internal cold blob infra

  • Periodic background jobs migrate old posts to cold tier.

Durability

  • Instagram ensures 11 9s durability (99.999999999%) by replicating across availability zones.
  • Metadata DB always stores references to all media files.

🌍 Global CDN Strategy

Why use CDN?

  • Users in India shouldn’t have to fetch images from the US.
  • CDN caches content near users, reducing latency & ISP transit costs.

Typical flow

  • When client requests an image/video URL, it hits the CDN first (like Akamai, Fastly, or Meta’s own edge servers).
  • If content is cached on edge, served instantly (50-100ms).
  • If not cached (cache miss), edge pulls from blob storage, caches it for next users.

Cache tuning

  • Instagram uses variable TTLs:
    • Popular stories: 1-2 mins
    • Feed posts: 1 hour
    • Profile pictures: 24 hours
  • Hot content gets pinned on edge nodes to survive TTL expiration.

Adaptive delivery

  • CDN or client decides what resolution to fetch based on:
    • Screen size
    • Network quality (4G vs 2G)
  • Instagram also employs lazy loading & progressive JPEGs for feed scrolls.

🛡️ Safeguards & costs

  • Upload services throttle large video uploads to protect processing pipeline.
  • Blobs are encrypted at rest + in transit (TLS).
  • Using CDN reduces origin traffic by 90-95%, massively cutting blob storage egress costs.

🏆 Summary: How it all comes together

At its core, Instagram solves a deceptively hard problem:

“How do you deliver personalized, fresh visual content to billions of people in under 200ms, without exploding your infrastructure costs?”

Their solution is an elegant composition of proven patterns:

  • Microservices split by read & write loads, with API gateways optimized for different traffic.
  • Sharded relational DBs for core data (users, posts, comments), and wide-column DBs (like Cassandra) for precomputed feed timelines.
  • Redis & Memcached to serve hot feeds & profiles in milliseconds.
  • Kafka + async workers for decoupling heavy operations like fan-outs & video processing.
  • Blob storage + CDN to make sure photos & videos load instantly, anywhere.
  • ML-based ranking pipelines that personalize feeds on the fly.

All glued together with robust monitoring, auto-retries, and chaos testing to ensure resilience.

Inside Netflix’s Architecture: How It Handles Billions of Views Seamlessly
+

Netflix is a prime example of a highly scalable and resilient distributed system. With over 260 million subscribers globally, Netflix streams content to millions of devices, ensuring low latency, high availability, and seamless user experience. But how does Netflix achieve this at such an enormous scale? Let’s dive deep into its architecture, breaking down the key technologies and design choices that power the world’s largest streaming platform.

1. Microservices and Distributed System Design

Netflix follows a microservices-based architecture, where independent services handle different functionalities, such as:

  • User Authentication – Validates and manages user accounts, including password resets, MFA, and session management.
  • Content Discovery – Powers search, recommendations, and personalized content using real-time machine learning models.
  • Streaming Service – Manages video delivery, adaptive bitrate streaming, and content buffering to ensure smooth playback.
  • Billing and Payments – Handles subscriptions, regional pricing adjustments, and fraud detection.

Each microservice runs independently and communicates via APIs, ensuring high availability and scalability. This architecture allows Netflix to roll out updates seamlessly, preventing single points of failure from affecting the entire system.

Why Microservices?

  • Scalability: Each service scales independently based on demand.
  • Resilience: Failures in one service do not bring down the entire system.
  • Rapid Development: Teams can work on different services simultaneously without dependencies slowing them down.
  • Global Distribution: Services are deployed across multiple AWS regions to reduce latency.

2. Netflix’s Cloud Infrastructure – AWS at Scale

Netflix operates entirely on Amazon Web Services (AWS), leveraging the cloud for elasticity and reliability. Some key AWS services powering Netflix include:

  • EC2 (Elastic Compute Cloud): Provides scalable virtual machines for compute-heavy tasks like encoding and data processing.
  • S3 (Simple Storage Service): Stores video assets, user profiles, logs, and metadata.
  • DynamoDB & Cassandra: NoSQL databases for storing user preferences, watch history, and metadata, ensuring low-latency reads and writes.
  • AWS Lambda: Runs serverless functions for lightweight, event-driven tasks such as real-time analytics and log processing.
  • Elastic Load Balancing (ELB): Distributes incoming traffic efficiently across multiple microservices and prevents overload.
  • Kinesis & Kafka: Event streaming platforms for real-time data ingestion, powering features like personalized recommendations and A/B testing.

Netflix’s cloud-native approach allows it to rapidly scale during peak traffic (e.g., when a new show drops) and ensures automatic failover in case of infrastructure issues.

3. Content Delivery at Scale – Open Connect

A core challenge for Netflix is streaming high-quality video to users without buffering or delays. To solve this, Netflix built its own Content Delivery Network (CDN) called Open Connect. Instead of relying on third-party CDNs, Netflix places cache servers (Open Connect Appliances) in ISPs’ data centers, bringing content closer to users.

Benefits of Open Connect:

  • Lower Latency: Content is streamed from local ISP servers rather than distant cloud data centers.
  • Reduced ISP Bandwidth Usage: By caching popular content closer to users, Netflix reduces congestion on internet backbone networks.
  • Optimized Streaming Quality: Ensures 4K and HDR content delivery with minimal buffering.

Netflix’s edge caching approach significantly improves the user experience while cutting costs on bandwidth-heavy cloud operations.

4. Netflix’s Tech Stack – From Frontend to Streaming Infrastructure

Netflix employs a vast and robust tech stack covering frontend, backend, databases, streaming, and CDN services.

Frontend Technologies:

  • React.js & Node.js – The Netflix UI is built using React.js for dynamic rendering, with Node.js supporting server-side rendering.
  • Redux & RxJS – For state management and handling asynchronous data streams.
  • GraphQL & Falcor – Efficient data-fetching mechanisms to optimize API responses.

Backend Technologies:

  • Java & Spring Boot – Most microservices are built using Java with Spring Boot.
  • Python & Go – Used for various backend services, especially in machine learning and observability tools.
  • gRPC & REST APIs – High-performance communication between microservices.

Databases & Storage:

  • DynamoDB & Cassandra – NoSQL databases for user preferences, watch history, and metadata storage.
  • MySQL – Used for transactional data such as billing and payments.
  • S3 & EBS (Elastic Block Store) – For storing logs, metadata, and assets.

Event-Driven Architecture:

  • Apache Kafka & AWS Kinesis – Handles event streaming, real-time analytics, and log processing.

Streaming Infrastructure:

  • FFmpeg – Used for video encoding and format conversion.
  • VMAF (Video Multi-Method Assessment Fusion) – Netflix’s AI-powered quality assessment tool to optimize streaming quality.
  • DASH & HLS Protocols – Adaptive bitrate streaming protocols to adjust video quality dynamically.

Content Delivery – Open Connect CDN:

Netflix has built its own CDN (Content Delivery Network), Open Connect, which:

  • Deploys dedicated caching servers at ISP locations.
  • Reduces network congestion and improves video streaming quality.
  • Uses BGP routing to optimize data transfer to end users.

Observability & Performance Monitoring:

  • Atlas – Netflix’s real-time telemetry platform.
  • Eureka – Service discovery tool for microservices.
  • Hystrix – Circuit breaker for handling failures.
  • Zipkin – Distributed tracing to analyze request flow across services.
  • Spinnaker – Manages multi-cloud deployments.

Security & Digital Rights Management (DRM):

  • Widevine, PlayReady, and FairPlay DRM – To protect digital content from piracy.
  • Token-Based Authentication – Ensures secure API calls between microservices.
  • AI-powered Fraud Detection – Uses machine learning to prevent credential stuffing and account sharing abuse.

5. Resilience and Fault Tolerance – Chaos Engineering

Netflix ensures high availability using Chaos Engineering, a discipline where failures are deliberately introduced to test system resilience. Their famous Chaos Monkey tool randomly shuts down services to verify automatic recovery mechanisms. Other tools in their Simian Army include:

  • Latency Monkey: Introduces artificial delays to simulate network slowdowns.
  • Conformity Monkey: Detects non-standard or misconfigured instances and removes them.
  • Chaos Gorilla: Simulates the failure of entire AWS regions to test system-wide resilience.

Why Chaos Engineering?

Netflix must be prepared for unexpected failures, whether caused by network issues, cloud provider outages, or software bugs. By proactively testing failures, Netflix ensures that users never experience downtime.

6. Personalisation & AI – The Brain Behind Netflix Recommendations

Netflix’s recommendation engine is powered by Machine Learning and Deep Learning algorithms that analyze:

  • Watch history – What users have previously watched.
  • User interactions – Browsing behavior, pauses, skips, and rewatches.
  • Content metadata – Genre, actors, directors, cinematography styles, and even scene compositions.
  • Collaborative filtering – Finds similar users and suggests content based on shared preferences.
  • Contextual Bandit Algorithms – A form of reinforcement learning that adjusts recommendations in real-time based on user feedback.

Netflix employs A/B testing at scale, ensuring that every UI change, recommendation tweak, or algorithm update is rigorously tested before a full rollout.

7. Observability & Monitoring – Tracking Millions of Events per Second

With millions of users watching content simultaneously, Netflix must track system performance in real time. Key monitoring tools include:

  • Atlas – Netflix’s real-time telemetry platform for tracking system health.
  • Eureka – Service discovery tool for routing traffic between microservices.
  • Hystrix – Circuit breaker library to prevent cascading failures.
  • Spinnaker – Automated deployment tool for rolling out software updates seamlessly.
  • Zipkin – Distributed tracing tool to analyze request flow across microservices.

This observability stack allows Netflix to proactively detect anomalies, reducing the risk of performance degradation.

8. Security & Privacy – Keeping Netflix Safe

Netflix takes security seriously, implementing:

  • End-to-End Encryption: Protects user data and streaming content from unauthorized access.
  • Multi-Factor Authentication (MFA): Prevents account takeovers.
  • Access Control & Role-Based Policies: Restricts employee access to sensitive services.
  • DRM (Digital Rights Management): Prevents unauthorized content distribution through watermarking and encryption.
  • Bot Detection & Fraud Prevention: Identifies and blocks credential stuffing attacks and account sharing abuse.

Final Thoughts – Why Netflix’s Architecture is a Gold Standard

Netflix’s ability to handle millions of concurrent users, deliver content with ultra-low latency, and recover from failures automatically is a testament to its world-class distributed system architecture. By leveraging cloud computing, microservices, machine learning, chaos engineering, and edge computing, Netflix has set the benchmark for high-scale applications.

Mastering System Design: The Ultimate Guide
+

System design can feel overwhelming.
But it doesn't have to be.

The secret?
Stop chasing buzzwords.
Start understanding how real systems work — one piece at a time.

After 16+ years of working in tech, I’ve realized most engineers hit a ceiling not because of coding skills, but because they never learned to think in systems.

In this post, I’ll give you the roadmap I wish I had, with detailed breakdowns, examples, and principles that apply whether you’re preparing for an interview or building for scale.

🔹 Step 1: Master the Fundamentals

System design begins with mastering foundational concepts that are universal to distributed systems.

Let’s go beyond the surface:

1. Distributed Systems

A distributed system is a collection of independent machines working together as one.
Most modern tech giants — Netflix, Uber, WhatsApp — run on distributed architectures.

Challenges include:

  • Coordination
  • State consistency
  • Failures and retries
  • Network partitions

Real-world analogy:
A remote team working on a shared document must keep in sync. Any update from one person must reflect everywhere — just like nodes in a distributed system syncing data.

2. CAP Theorem

The CAP Theorem says you can only pick two out of three:

  • Consistency: All nodes return the same data.
  • Availability: Every request gets a response.
  • Partition Tolerance: System continues despite network failure.

Example:

  • CP System (like MongoDB in default mode): Prioritizes consistency over availability.
  • AP System (like Couchbase): Prioritizes availability, tolerates inconsistency.

Trade-offs matter. A payment system must be consistent. A messaging app can tolerate delays or eventual consistency.

3. Replication

Replication improves fault tolerance, availability, and read performance by duplicating data.

Types:

  • Synchronous: Safer, but slower (waits for confirmation).
  • Asynchronous: Faster, but at risk of data loss during failure.

Example:
Gmail stores your emails across multiple data centers so they’re never lost — even if one server goes down.

4. Sharding

Sharding splits data across different servers or databases to handle scale.

Sharding strategies:

  • Range-based (e.g., user A–F on one shard)
  • Hash-based (distributes load evenly)
  • Geo-based (user data stored by region)

Example:
Twitter shards tweets by user ID to prevent one database from being a bottleneck for writes.

Complexity:
Sharding introduces cross-shard queries, rebalancing, and metadata management — but is essential for web-scale systems.

5. Caching

Caching reduces repeated computation and DB hits by storing precomputed or frequently accessed data in memory.

Types:

  • Client-side: Browser stores assets
  • Server-side: Redis or Memcached store DB results or objects
  • CDN: Caches static files at edge locations

Example:
Reddit caches user karma and post scores to avoid recalculating on every page load.

Challenges:

  • Cache invalidation
  • Choosing correct TTLs
  • Preventing stale data from affecting correctness

🔹 Step 2: Understand Core Components

These components are the Lego blocks of modern system design.
Knowing when and how to use them is the architect’s superpower.

1. API Gateway

The entry point for all client requests in a microservices setup.

Responsibilities:

  • Auth & token validation
  • SSL termination
  • Request routing
  • Rate limiting & throttling

Example:
Netflix’s Zuul API Gateway routes millions of requests per second and enforces rules like regional restrictions or A/B testing.

2. Load Balancer

Distributes traffic evenly across servers to maximize availability and reliability.

Key benefits:

  • Prevents any one server from overloading
  • Supports horizontal scaling
  • Enables health checks and failover

Example:
Amazon uses Elastic Load Balancers to distribute checkout traffic across zones — ensuring consistent performance even during Black Friday sales.

3. Database (SQL & NoSQL)

Both database types are useful — but for different needs.

SQL (PostgreSQL, MySQL):

  • Great for transactional consistency (e.g., banking)
  • Joins, constraints, ACID guarantees

NoSQL (MongoDB, Cassandra, DynamoDB):

  • Schema flexibility
  • High scalability
  • Eventual consistency models

Example:
Facebook uses MySQL for social graph relations and TAO (a NoSQL layer) for scalable reads/writes on user feeds.

4. Cache Layer

A low-latency, high-speed memory layer (usually Redis or Memcached) that stores hot data.

Use cases:

  • Session storage
  • Leaderboards
  • Search autocomplete
  • Expensive DB joins

Example:
Pinterest uses Redis to cache user boards, speeding up access by 10x while reducing DB load significantly.

5. Message Queue

Enables asynchronous communication between services.

Why use it:

  • Decouples producers and consumers
  • Handles retries, failures, delays
  • Smooths traffic spikes (buffering)

Popular tools:

  • Kafka (high-throughput streams)
  • RabbitMQ (complex routing)
  • AWS SQS (fully managed)

Example:
Spotify uses Kafka to process billions of logs and user events daily, which are then used for recommendations and analytics.

6. Content Delivery Network (CDN)

A global layer of edge servers that serve static content from locations closest to the user.

Improves:

  • Page load speed
  • Media streaming quality
  • Global availability

Example:
YouTube videos are cached across CDN nodes worldwide, so when someone in Brazil presses “play,” it loads from a nearby node — not from California.

Bonus:
CDNs often include DDoS protection and analytics.

🔹 Step 3: Learn Architecture Patterns That Actually Scale

Architecture is not one-size-fits-all.
Choosing the right pattern depends on team size, product stage, scalability needs, and performance requirements.

Let’s look at a few patterns every engineer should understand.

1. Monolithic Architecture

All logic — UI, business, and data access — lives in a single codebase.

Pros:

  • Easier to build and deploy initially
  • Great for early-stage startups
  • No network overhead

Cons:

  • Harder to scale teams
  • Tight coupling
  • Difficult to adopt new tech in parts

Example:
Early versions of Instagram were monoliths in Django and Postgres — simple, fast, effective.

2. Microservices Architecture

System is split into independent services, each owning its domain.

Pros:

  • Independent deployments
  • Better scalability
  • Polyglot architecture (teams choose tech)

Cons:

  • Complex networking
  • Needs API gateway, service discovery, observability
  • Cross-service debugging is hard

Example:
Amazon migrated to microservices to allow autonomous teams to innovate faster. Each service communicates over well-defined APIs.

3. Event-Driven Architecture

Services don’t call each other directly — they publish or subscribe to events.

Pros:

  • Asynchronous processing
  • Loose coupling
  • Natural scalability

Cons:

  • Event ordering issues
  • Difficult to debug
  • Requires strong observability

Example:
Uber’s trip lifecycle is event-driven: request → accept → start → end. Kafka handles the orchestration of millions of rides in real time.

4. Pub/Sub Pattern

Publishers send messages to a topic, and subscribers receive updates.

Use Cases:

  • Notification systems
  • Logging
  • Analytics pipelines

Tools:

  • Kafka, Google Pub/Sub, Redis Streams

Example:
Slack uses Pub/Sub internally to update message feeds across devices instantly when a message is received.

5. CQRS (Command Query Responsibility Segregation)

Separate models for writing (commands) and reading (queries).

Why it’s useful:

  • Optimizes read-heavy systems
  • Allows different scaling strategies
  • Reduces read-write contention

Example:
E-commerce apps use CQRS to process orders (write) and show order history (read) via different services, often with denormalized read models.
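
A compact sketch of that command/query split, with invented records and store interfaces:

```csharp
public record PlaceOrder(long CustomerId, long ProductId, int Quantity);
public record OrderSummary(long OrderId, string Status, decimal Total);

public interface IOrderWriteStore { Task<long> InsertAsync(PlaceOrder cmd); }
public interface IOrderReadStore  { Task<IReadOnlyList<OrderSummary>> ByCustomerAsync(long id); }

// Command side: validates and mutates state through the write model.
public class PlaceOrderHandler
{
    private readonly IOrderWriteStore _writes;
    public PlaceOrderHandler(IOrderWriteStore writes) => _writes = writes;

    public Task<long> HandleAsync(PlaceOrder cmd)
    {
        if (cmd.Quantity <= 0) throw new ArgumentException("Quantity must be positive");
        return _writes.InsertAsync(cmd); // the read model is updated asynchronously
    }
}

// Query side: reads the denormalized view, never mutates.
public class OrderHistoryQuery
{
    private readonly IOrderReadStore _reads;
    public OrderHistoryQuery(IOrderReadStore reads) => _reads = reads;

    public Task<IReadOnlyList<OrderSummary>> HandleAsync(long customerId)
        => _reads.ByCustomerAsync(customerId);
}
```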

🔚 Conclusion

Mastering system design isn't about memorizing diagrams or buzzwords — it's about understanding how systems behave, scale, and fail in the real world.

Start with the fundamentals: distributed systems, replication, sharding, and caching.
Then, dive deep into core components like API gateways, load balancers, databases, caches, queues, and CDNs.
Finally, learn to apply the right architecture patterns — from monoliths to microservices, event-driven systems to CQRS.

Whether you're prepping for interviews or building production-grade apps, always ask:
“What are the trade-offs?” and
“Where’s the bottleneck?”

Caching 101: Everything You Need to Know
+


Introduction to Caching

In the relentless pursuit of speed, where every millisecond shapes user experience and business outcomes, caching stands as the most potent weapon in a system’s arsenal. Caching is the art and science of storing frequently accessed data, computations, or responses in ultra-fast memory, ensuring they’re instantly available without the costly overhead of recomputing or fetching from slower sources like disks, databases, or remote services. By caching everything—from static assets like images and JavaScript to dynamic outputs like API responses and machine learning predictions—systems can slash latency from hundreds of milliseconds to mere microseconds, delivering near-instantaneous responses that users expect in today’s digital world.

Why Caching Matters

Caching is a fundamental technique in computer science and system design that significantly enhances the performance, scalability, and reliability of applications. By storing frequently accessed data in a fast, temporary storage layer, caching minimizes the need to repeatedly fetch or compute data from slower sources like disks, databases, or remote services.

1. Latency Reduction

Caching drastically reduces the time it takes to retrieve data by storing it in high-speed memory closer to the point of use. The latency difference between various storage layers is stark:

  • CPU Cache (L1/L2): Access times are in the range of 1–3 nanoseconds.
  • RAM (e.g., Redis, Memcached): Access times are around 10–100 microseconds.
  • SSD: Access times are approximately 100 microseconds to 1 millisecond.
  • HDD: Access times are in the range of 5–10 milliseconds.
  • Network Calls (e.g., API or database queries over the internet): These can take 10–500 milliseconds, depending on network latency and server response times.

Example Scenarios:

  • Redis Cache Hit: Retrieving a user session from Redis takes ~0.5ms, compared to a PostgreSQL query fetching the same data in ~50ms. For a high-traffic application with millions of users, this shaves seconds off cumulative response times.
  • CDN Edge Caching: A content delivery network (CDN) like Cloudflare caches static assets (e.g., images, CSS, JavaScript) at edge locations worldwide. A user in Tokyo accessing a cached image might experience a 10ms latency, compared to 200ms if the request hits the origin server in the US.
  • Browser Caching: Storing a webpage’s static resources in the browser cache eliminates round-trips to the server, reducing page load times from 1–2 seconds to under 100ms for subsequent visits.

Technical Insight:

Caching exploits the principle of locality (temporal and spatial), where recently or frequently accessed data is likely to be requested again. By keeping this data in faster storage layers, systems avoid bottlenecks caused by slower IO operations.

2. Reduced Load on Backend Systems

Caching acts as a buffer between the frontend and backend, shielding resource-intensive services like databases, APIs, or microservices from excessive requests. This offloading is critical for maintaining system stability under high load.

How It Works:

  • Database Offloading: Caching frequently queried data (e.g., user profiles, product details) in an in-memory store like Redis or Memcached reduces database read operations.
  • API Offloading: Caching API responses (e.g., weather data or stock prices) prevents repeated calls to external services, which often have rate limits or high latency.
  • Compute Offloading: For computationally expensive operations like machine learning inferences or image rendering, caching results avoids redundant processing.

3. Improved Scalability

Caching enables systems to handle massive traffic spikes without requiring proportional increases in infrastructure. By serving data from cache, systems reduce the need for additional servers, databases, or compute resources.

Key Mechanisms:

  • Horizontal Scaling with CDNs: CDNs like Akamai or Cloudflare distribute cached content across global edge servers, serving millions of users without hitting the origin server.
  • In-Memory Caching: Tools like Redis or Memcached allow applications to scale horizontally by adding cache nodes, which are cheaper and easier to manage than scaling databases or compute clusters.
  • Load Balancing with Caching: Caching at the application layer (e.g., Varnish for web servers) distributes load efficiently, allowing systems to scale to millions of requests per second.

4. Enhanced User Experience

Low latency and fast response times directly translate to a better user experience, which is critical for user retention and engagement. Caching ensures that applications feel responsive and seamless.

Technical Insight:

Caching aligns with the performance budget concept in web development, where every millisecond counts. Studies show that a 100ms delay in page load time can reduce conversion rates by 7%. Caching helps meet these stringent performance requirements.

5. Cost Efficiency

Caching reduces the need for expensive resources, such as high-performance databases, GPU compute, or frequent API calls, leading to significant cost savings in cloud environments.

Cost-Saving Scenarios:

  • Database Costs: By caching query results, systems reduce database read operations, lowering costs for managed database services like AWS RDS or Google Cloud SQL.
  • Compute Costs: Caching the output of machine learning models (e.g., recommendation systems or image processing) in memory avoids redundant GPU or TPU usage.
  • API Costs: Caching responses from paid third-party APIs (e.g., Google Maps or payment gateways) reduces the number of billable requests.

Types of Caches

Caching can be implemented at every layer of the technology stack to eliminate redundant computations and data fetches, ensuring optimal performance. Each layer serves a specific purpose, leveraging proximity to the user or application to reduce latency and resource usage. Below is an in-depth look at the types of caches, their use cases, and advanced applications.

1. Browser Cache

The browser cache stores client-side resources, enabling instant access without network requests. It’s the first line of defense for web and mobile applications, reducing server load and improving user experience.

  • What’s Cached: HTML, CSS, JavaScript, images, fonts, media files, API responses, and dynamic data (via Service Workers, localStorage, or IndexedDB).
  • Performance Impact: Using HTTP headers like Cache-Control: max-age=86400 or ETag, browsers can serve entire web pages or assets in 0–10ms, compared to 100–500ms for network requests.
  • Mechanisms:
    • HTTP Cache Headers: Cache-Control, Expires, and ETag dictate how long resources are cached and when to validate them.
    • Service Workers: Enable programmatic caching of API responses and dynamic content, supporting offline functionality.
    • Local Storage/IndexedDB: Store JSON payloads or user-specific data (e.g., preferences, form data) for instant rendering.
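
As a concrete illustration of the HTTP-header mechanism, here's a minimal Flask sketch (the route, asset body, and ETag value are illustrative) that marks an asset cacheable for a day:

```python
from flask import Flask, make_response

app = Flask(__name__)

@app.route("/static/app.js")
def app_js():
    resp = make_response("console.log('hello');")  # stand-in for the real asset
    resp.headers["Content-Type"] = "application/javascript"
    # Let browsers reuse this asset for a day without a network request.
    resp.headers["Cache-Control"] = "public, max-age=86400"
    # ETag enables a cheap 304 Not Modified revalidation after expiry.
    resp.headers["ETag"] = '"app-js-v1"'
    return resp
```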

2. CDN Cache

Content Delivery Networks (CDNs) like Cloudflare, Akamai, or AWS CloudFront cache content at edge nodes geographically closer to users, minimizing latency and offloading origin servers.

  • What’s Cached: Static assets (images, CSS, JavaScript), dynamic HTML, API responses, GraphQL query results, and even streaming media.
  • Performance Impact: Edge nodes reduce latency from 100–500ms (origin server) to 5–20ms by serving cached content locally. For example, caching a news article in Singapore cuts latency from 200ms (US server) to 10ms.
  • Mechanisms:
    • Edge Caching: Stores content at global points of presence (PoPs).
    • Cache Purging: Supports manual or event-driven invalidation (e.g., via webhooks or APIs).
    • Custom Rules: CDNs like Cloudflare allow caching of dynamic content with fine-grained rules (e.g., cache API responses for 1 minute).
  • Challenges: Cache invalidation for dynamic content, potential for stale data, and costs for high-traffic or large-scale caching.

3. Edge Cache

Edge caches, implemented via serverless platforms like Cloudflare Workers, AWS Lambda@Edge, or Fastly Compute, cache dynamically generated content closer to the user, blending the benefits of CDNs and application logic.

  • What’s Cached: Personalized pages, A/B test variants, localized translations, API responses, and real-time computations (e.g., cart summaries with discounts).
  • Performance Impact: Edge caches deliver in 5–15ms, bypassing backend servers and reducing latency by 80–90%.
  • Mechanisms:
    • Serverless Compute: Executes lightweight logic to generate or fetch content, then caches it at the edge.
    • Short-Lived Caching: Uses low TTLs (e.g., 10 seconds) for dynamic data like user sessions or real-time pricing.
  • Challenges: Limited compute resources in serverless environments, complex invalidation for user-specific data, and potential consistency issues.

4. Application-Level Cache

Application-level caches, typically in-memory stores like Redis, Memcached, or DynamoDB Accelerator (DAX), handle application-specific data, reducing backend queries and computations.

  • What’s Cached: API responses, user sessions, computed aggregations, temporary states, ML model predictions, and pre-rendered HTML fragments.
  • Performance Impact: Cache hits in Redis or Memcached take 0.1–0.5ms, compared to 10–100ms for database queries or API calls.
  • Mechanisms:
    • Key-Value Stores: Redis and Memcached store data as key-value pairs for fast retrieval.
    • Distributed Caching: Redis Cluster or DAX scales caching across multiple nodes.
    • Serialization: Caches complex objects (e.g., JSON, Protobuf) for efficient storage and retrieval.
  • Challenges: Memory costs for large datasets, cache invalidation complexity, and ensuring consistency for write-heavy workloads.

5. Database Cache

Database caches store query results, indexes, and execution plans within or alongside the database, optimizing read performance for repetitive queries.

  • What’s Cached: Query results, prepared statements, table metadata, and index lookups.
  • Performance Impact: Database caches (e.g., MySQL Query Cache, PostgreSQL’s shared buffers) return results in 1–5ms, compared to 10–50ms for uncached queries.
  • Mechanisms:
    • Internal Caching: MySQL’s query cache (when enabled) or PostgreSQL’s shared buffers store frequently accessed data.
    • External Caches: Tools like Amazon ElastiCache or Redis sit in front of databases, caching results for complex queries.
    • Prepared Statements: Databases cache execution plans for repeated queries, reducing parsing overhead.
  • Challenges: Limited cache size in databases, invalidation on data updates, and overhead for write-heavy workloads.

6. Distributed Cache

Distributed caches share data across multiple nodes in a microservices architecture, ensuring low-latency access for distributed systems.

  • What’s Cached: User profiles, session data, configuration settings, transaction metadata, and inter-service API responses.
  • Performance Impact: Distributed caches like Redis Cluster or Hazelcast deliver data in 0.5–2ms, avoiding 10–100ms cross-service calls.
  • Mechanisms:
    • Sharding: Distributes cache data across nodes for scalability.
    • Replication: Ensures high availability by replicating cache data.
    • Pub/Sub: Supports event-driven invalidation or updates (e.g., Redis Pub/Sub).

  • Challenges: Network overhead, data consistency across nodes, and higher operational complexity.

Caching Strategies

Caching strategies dictate how data is stored, retrieved, and updated to maximize efficiency and consistency. Each strategy is suited to specific use cases, balancing performance, consistency, and complexity.

1. Read-Through Cache

The cache acts as a proxy, fetching data from the backend on a miss and storing it automatically.

  • How It Works: The application queries the cache; on a miss, the cache fetches, stores, and returns the data.
  • Performance Impact: Cache hits take 0.1–1ms, compared to 10–500ms for backend fetches.
  • Use Case: Ideal for read-heavy workloads like search results or static data.
  • Example: A search engine caches query results (ranked documents, ads) in Redis, reducing latency from 300ms to 1ms. Libraries like Spring Cache automate read-through logic.
  • Advanced Use Case: Caching GraphQL query results in a read-through cache, using query hashes as keys, for instant API responses.
  • Challenges: Cache miss latency, backend load during misses, and complex cache logic.

2. Write-Through Cache

Every write operation updates both the cache and backend synchronously, ensuring consistency.

  • How It Works: Writes are applied to the cache and backend atomically.
  • Performance Impact: Cache reads are fast (0.1–0.5ms), but writes are slower due to backend sync.
  • Use Case: Critical for consistent data like financial transactions or inventory.
  • Example: An e-commerce app writes inventory updates to MySQL and Redis simultaneously, serving cached stock levels in 0.4ms.
  • Advanced Use Case: Caching user authentication tokens in Redis with write-through, ensuring immediate availability and consistency.
  • Challenges: Write latency, increased backend load, and complexity of atomic operations.

3. Write-Behind Cache (Write-Back)

Writes are stored in the cache first and asynchronously synced to the backend, optimizing write performance.

  • How It Works: Data is written to the cache immediately and synced later (e.g., via batch jobs or queues).
  • Performance Impact: Writes are fast (0.1–0.5ms), with backend sync delayed (e.g., every 5 seconds).
  • Use Case: High-write workloads like user actions, logs, or metrics.
  • Example: A social media app caches posts in Redis, serving them in 0.5ms while batching MySQL writes every 5 seconds, reducing write latency by 90%.
  • Advanced Use Case: Caching IoT sensor data in a write-behind cache, syncing to a time-series database hourly for analytics.
  • Challenges: Risk of data loss on cache failure, eventual consistency, and sync complexity.
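
A minimal write-behind sketch using redis-py, with a Redis list as the write buffer (the buffer key and batch writer are illustrative assumptions):

```python
import json
import time
import redis

r = redis.Redis()
BUFFER = "pending_writes"   # illustrative Redis list used as the write buffer

def record_action(action: dict) -> None:
    """Fast path: append to the cache buffer only (~0.1-0.5ms)."""
    r.rpush(BUFFER, json.dumps(action))

def save_batch_to_db(batch: list) -> None:
    # Stand-in for a real bulk insert into MySQL, a time-series DB, etc.
    print(f"persisting {len(batch)} records")

def flush_to_backend() -> None:
    """Background job: drain the buffer and batch-write to the database."""
    batch = []
    while (item := r.lpop(BUFFER)) is not None:
        batch.append(json.loads(item))
    if batch:
        save_batch_to_db(batch)

record_action({"user": "u1", "event": "like", "ts": time.time()})
flush_to_backend()   # in production a worker runs this every few seconds
```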

4. Cache-Aside (Lazy Loading)

The application explicitly manages caching, fetching and storing data on cache misses.

  • How It Works: The app checks the cache; on a miss, it fetches data, stores it in the cache, and returns it.
  • Performance Impact: Cache hits take 0.1–1ms, with full control over caching logic.
  • Use Case: Complex computations like ML inferences or dynamic data.
  • Example: A recommendation engine caches user suggestions in Memcached, reducing inference time from 600ms to 1ms.
  • Advanced Use Case: Caching database query results with custom logic to handle partial cache hits (e.g., fallback to stale data).
  • Challenges: Application complexity, cache stampede during misses, and manual invalidation.
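
Here's a minimal cache-aside sketch using redis-py (the key format, TTL, and database stub are illustrative):

```python
import json
import redis

r = redis.Redis()

def fetch_user_from_db(user_id: str) -> dict:
    # Stand-in for the real (slow) database query.
    return {"id": user_id, "name": "Ada"}

def get_user(user_id: str) -> dict:
    key = f"user:{user_id}"
    cached = r.get(key)
    if cached is not None:                   # cache hit: ~0.1-1ms
        return json.loads(cached)
    user = fetch_user_from_db(user_id)       # cache miss: go to the backend
    r.setex(key, 300, json.dumps(user))      # store with a 5-minute TTL
    return user
```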

5. Refresh-Ahead

The cache proactively refreshes data before expiration, ensuring freshness without miss penalties.

  • How It Works: The cache fetches updated data in the background based on access patterns or TTLs.
  • Performance Impact: Cache hits remain 0.1–0.5ms, with minimal miss spikes.
  • Use Case: Semi-static data like weather forecasts or stock prices.
  • Example: A weather app caches forecasts in Redis, refreshing them every 10 minutes, ensuring 0.3ms access and fresh data.
  • Advanced Use Case: Refreshing cached API responses for real-time sports scores, balancing freshness and performance.
  • Challenges: Background refresh overhead, predicting access patterns, and managing refresh frequency.

6. Additional Strategies

  • Write-Around: Writes bypass the cache, used for rarely accessed data to avoid cache pollution.
  • Cache Population: Pre-fills the cache with hot data during startup to avoid cold cache issues.
  • Stale-While-Revalidate: Serves stale data while fetching fresh data in the background, used by CDNs for dynamic content.

Comprehensive Example

A gaming platform employs multiple strategies:

  • Read-Through: Caches leaderboards in Redis for 1ms access.
  • Write-Through: Updates player stats in Redis and PostgreSQL atomically.
  • Write-Behind: Stores chat messages in Redis, syncing to disk every 5 seconds.
  • Cache-Aside: Caches game states in Memcached with custom logic.
  • Refresh-Ahead: Refreshes match schedules in Redis every minute.
  • Result: Every interaction is cached, delivering sub-millisecond performance.

Eviction and Invalidation Policies

Caches have finite memory, so intelligent eviction and invalidation policies are needed to manage space and ensure data freshness. These policies determine which data is removed and how stale data is handled.

1. LRU (Least Recently Used)

Evicts the least recently accessed items, prioritizing fresh data.

  • How It Works: Tracks access timestamps, removing the oldest accessed items.
  • Use Case: Dynamic data like user sessions or recent searches.
  • Performance Impact: Ensures high hit rates (>90%) for frequently accessed data.
  • Example: Redis with LRU evicts inactive user sessions, serving active ones in 0.3ms.
  • Advanced Use Case: Caching API tokens with LRU in a microservice, ensuring active tokens remain available.
  • Challenges: Memory overhead for tracking access times, potential eviction of valuable data.
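
For an in-process illustration, Python's functools.lru_cache implements exactly this policy; Redis achieves the equivalent server-side with the maxmemory-policy allkeys-lru setting:

```python
from functools import lru_cache

@lru_cache(maxsize=1024)   # evicts the least recently used entry when full
def get_session(session_id: str) -> dict:
    # Stand-in for a slow lookup (database, auth service, ...).
    return {"session_id": session_id, "active": True}

get_session("abc")   # miss: computed and stored
get_session("abc")   # hit: served from the in-process LRU cache
print(get_session.cache_info())  # CacheInfo(hits=1, misses=1, maxsize=1024, currsize=1)
```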

2. LFU (Least Frequently Used)

Evicts items accessed least often, prioritizing popular data.

  • How It Works: Tracks access frequency, removing low-frequency items.
  • Use Case: Skewed access patterns like popular products or trending posts.
  • Performance Impact: Optimizes for high-frequency data, achieving 95% hit rates.
  • Example: A video platform caches top movies in Memcached with LFU, serving them in 0.4ms.
  • Advanced Use Case: Caching trending hashtags in Redis with LFU for social media analytics.
  • Challenges: Frequency tracking overhead, risk of evicting new data too soon.

3. FIFO (First-In-First-Out)

Evicts the oldest data, regardless of access patterns.

  • How It Works: Removes data in the order it was added.
  • Use Case: Sequential data like logs or time-series metrics.
  • Performance Impact: Simple but less adaptive, with hit rates of 70–80%.
  • Example: A monitoring system caches recent metrics in Redis with FIFO, serving dashboards in 0.5ms.
  • Advanced Use Case: Caching event logs for real-time analytics with FIFO, ensuring recent data availability.
  • Challenges: Ignores access patterns, leading to lower hit rates.

4. TTL (Time-to-Live)

Evicts data after a fixed duration, ensuring freshness.

  • How It Works: Assigns expiration times to cache entries (e.g., 1 second, 1 hour).
  • Use Case: Time-sensitive data like stock prices or news feeds.
  • Performance Impact: Guarantees freshness with 0.1–0.5ms access times.
  • Example: A trading app caches market data with a 1-second TTL, serving it in 0.2ms.
  • Advanced Use Case: Randomized TTLs in Redis to avoid mass expirations, ensuring smooth cache performance.
  • Challenges: Mass expiration spikes, choosing appropriate TTLs.
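
A small sketch of the randomized-TTL idea with redis-py (the 20% jitter fraction is an arbitrary choice for illustration):

```python
import json
import random
import redis

r = redis.Redis()

def cache_with_jitter(key: str, value: dict, base_ttl: int = 60) -> None:
    # Add up to 20% random jitter so entries written together do not all
    # expire at the same instant, avoiding mass-expiration spikes.
    ttl = base_ttl + random.randint(0, base_ttl // 5)
    r.setex(key, ttl, json.dumps(value))

cache_with_jitter("quote:AAPL", {"price": 189.7})
```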

5. Explicit Invalidation

Cache entries are cleared manually or via events triggered when the underlying data changes.

  • How It Works: Clears specific cache entries using APIs or event systems (e.g., Redis Pub/Sub, Kafka).
  • Use Case: Dynamic data like user profiles or CMS content.
  • Performance Impact: Ensures freshness with minimal latency overhead.
  • Example: A CMS invalidates cached pages in Cloudflare on content updates, serving fresh data in 10ms.
  • Advanced Use Case: Using Kafka to broadcast cache invalidation events across a microservices cluster.
  • Challenges: Event system complexity, potential for missed invalidations.

6. Versioned Keys

Cache keys include version numbers to serve fresh data without invalidation.

  • How It Works: Keys like user:v3:1234 ensure fresh data by updating version numbers.
  • Use Case: Frequently updated data like user profiles or configurations.
  • Performance Impact: Seamless updates with 0.1–0.5ms access times.
  • Example: An API caches user profiles with versioned keys, serving them in 0.3ms.
  • Advanced Use Case: Caching configuration settings with versioned keys in a CI/CD pipeline, ensuring instant updates.
  • Challenges: Key management complexity, potential for orphaned keys.
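
A minimal sketch of versioned keys with redis-py (key formats are illustrative; giving entries a TTL, as here, lets orphaned old versions expire on their own):

```python
import json
import redis

r = redis.Redis()

def profile_key(user_id: str) -> str:
    # The current version lives in a small counter key; bumping it makes
    # all previously cached profiles unreachable without deleting them.
    version = int(r.get(f"user_version:{user_id}") or 1)
    return f"user:v{version}:{user_id}"

def update_profile(user_id: str, profile: dict) -> None:
    new_version = r.incr(f"user_version:{user_id}")        # e.g., v3 -> v4
    # TTL ensures orphaned older versions eventually expire.
    r.setex(f"user:v{new_version}:{user_id}", 3600, json.dumps(profile))
```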

7. Additional Policies

  • Random Eviction: Evicts random items, used for simple caches with uniform access patterns.
  • Size-Based Eviction: Evicts largest items to free space, used for memory-constrained caches.
  • Priority-Based Eviction: Assigns priorities to cache items, evicting low-priority ones first.

Tooling and Frameworks

Caching tools and frameworks are critical for implementing effective caching strategies across various layers of the stack. These tools range from in-memory stores to distributed data grids and application-level abstractions, each designed to optimize performance, scalability, and ease of integration. Below is an in-depth look at the most widely used tools and frameworks and their advanced applications.

1. Redis

Redis is an open-source, in-memory data structure store used as a cache, database, and message broker. Its versatility and performance make it a go-to choice for application-level and distributed caching.

  • Features:
    • In-Memory Storage: Stores data as key-value pairs, lists, sets, hashes, and more, with 0.1–0.5ms access times.
    • TTL Support: Time-to-Live (TTL) for automatic expiration of keys, ideal for time-sensitive data like session tokens or news feeds.
    • Persistence: Optional disk persistence (RDB snapshots, AOF logs) for durability.
    • Clustering: Redis Cluster shards data across nodes for scalability and high availability.
    • Pub/Sub: Supports event-driven cache invalidation via publish/subscribe channels.
    • Advanced Data Structures: Bitmaps, HyperLogLog, and geospatial indexes for specialized use cases.
  • Use Case: An e-commerce platform caches product details in Redis, serving them in 0.3ms vs. 50ms for a PostgreSQL query. Pub/Sub invalidates cache entries on inventory updates.

2. Memcached

Memcached is a lightweight, distributed memory object caching system optimized for simplicity and speed.

  • Features:
    • High Performance: Key-value store with sub-millisecond access times (0.1–0.4ms).
    • Distributed Architecture: Scales horizontally by sharding keys across nodes.
    • No Persistence: Purely in-memory, prioritizing speed over durability.
    • Multi-Threaded: Handles high concurrency efficiently.
  • Use Case: A news website caches article metadata in Memcached, reducing database queries by 90% and serving data in 0.4ms.
  • Advanced Use Case: Caching pre-rendered HTML fragments for a CMS, with LFU eviction to prioritize popular articles.
  • Example: Twitter uses Memcached to cache tweet metadata, handling millions of requests per second with <1ms latency.
  • Tools Integration: Memcached clients like libmemcached or pylibmc, and monitoring via Prometheus exporters.
  • Challenges: No built-in persistence, limited data structures (key-value only), and manual invalidation.

3. Caffeine (Java)

Caffeine is a high-performance, in-memory local caching library for Java, designed as a modern replacement for Guava Cache.

  • Features:
    • TTL and Size-Based Eviction: Supports time-based and maximum-size eviction policies.
    • Refresh-Ahead: Automatically refreshes cache entries based on access patterns.
    • Asynchronous Loading: Non-blocking cache population for low-latency applications.
    • High Throughput: Optimized for low-latency access (0.01–0.1ms) in single-process environments.
    • Statistics: Tracks hit/miss rates and eviction counts for monitoring.
  • Use Case: A Java-based web server caches configuration settings in Caffeine, serving them in 0.01ms vs. 1ms for Redis.

4. Hazelcast

Hazelcast is an open-source, distributed in-memory data grid that combines caching, querying, and compute capabilities.

  • Features:
    • Distributed Caching: Shards and replicates data across a cluster for scalability and fault tolerance.
    • Querying: SQL-like queries on cached data using predicates.
    • In-Memory Computing: Executes distributed tasks (e.g., MapReduce) on cached data.
    • High Availability: Automatic failover and replication.
    • Near Cache: Local caching on client nodes for ultra-low latency (0.01–0.1ms).
  • Use Case: A financial app caches market data in Hazelcast, enabling 0.5ms access across microservices.

5. Apache Ignite

Apache Ignite is a distributed in-memory data grid and caching platform with advanced querying and compute features.

  • Features:
    • Distributed Caching: Key-value and SQL-based caching across nodes.
    • ACID Transactions: Supports transactional consistency for cached data.
    • SQL Queries: ANSI SQL support for querying cached data.
    • Compute Grid: Executes distributed computations on cached data.
    • Persistence: Optional disk persistence for durability.
  • Use Case: A banking app caches transaction metadata in Ignite, enabling 0.5ms access with ACID guarantees.

6. Spring Cache

Spring Cache is a Java framework abstraction for application-level caching, supporting pluggable backends like Redis, Memcached, or Caffeine.

  • Features:
    • Declarative Caching: Annotations like @Cacheable, @CachePut, and @CacheEvict simplify caching logic.
    • Pluggable Backends: Integrates with Redis, Ehcache, Caffeine, and others.
    • Cache Abstraction: Provides a consistent API across caching providers.
    • Conditional Caching: Supports custom cache keys and conditions.
  • Use Case: A Spring Boot app caches REST API responses in Redis via @Cacheable, reducing latency from 50ms to 0.3ms.

7. Django Cache

Django Cache is a Python framework abstraction for caching in Django applications, supporting multiple backends.

  • Features:
    • Flexible Backends: Supports Redis, Memcached, database caching, and in-memory caching.
    • Per-Site Caching: Caches entire pages or views.
    • Per-View Caching: Caches specific view outputs with decorators like @cache_page.
    • Low-Level API: Fine-grained control for caching arbitrary data.
  • Use Case: A Django-based blog caches rendered pages in Memcached, serving them in 0.4ms vs. 20ms for database rendering.
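
A minimal sketch of per-view and low-level caching in Django (the view body and cache keys are illustrative, and it assumes a cache backend such as Memcached is configured under CACHES in settings.py):

```python
# views.py -- per-view caching with Django's cache framework.
from django.http import HttpResponse
from django.views.decorators.cache import cache_page

def render_articles() -> str:
    return "<ul><li>Caching 101</li></ul>"   # stand-in for real rendering

@cache_page(60 * 15)   # cache the rendered response for 15 minutes
def article_list(request):
    # On a cache hit this body never runs; the stored response is returned.
    return HttpResponse(render_articles())

# Low-level API for caching arbitrary data:
from django.core.cache import cache
cache.set("trending_tags", ["devops", "kafka"], timeout=300)
print(cache.get("trending_tags"))   # ['devops', 'kafka']
```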

Metrics to Monitor

Monitoring caching performance is critical to ensure high hit rates, low latency, and efficient resource usage. Below is an expanded list of metrics to track, along with monitoring techniques, tools, and examples to optimize cache performance.

1. Cache Hit Rate / Miss Rate

  • Definition: The percentage of requests served from the cache (hit rate) vs. those requiring backend fetches (miss rate).
  • Importance: High hit rates (>90%) indicate effective caching; high miss rates signal poor cache utilization or invalidation issues.
  • Monitoring:
    • Use tools like Redis INFO, Memcached stats, or Caffeine’s statistics API to track hits and misses.
    • Visualize with Prometheus and Grafana dashboards for real-time insights.
    • Set alerts for hit rates dropping below 80%.
  • Example: A Redis cache for product details achieves a 95% hit rate, serving 95% of requests in 0.3ms. A sudden drop to 70% triggers an alert, revealing a misconfigured TTL.
  • Tools: Prometheus, Grafana, RedisInsight, AWS CloudWatch.
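
A quick sketch of computing the hit rate from Redis's own counters with redis-py (the 80% alert threshold mirrors the suggestion above):

```python
import redis

r = redis.Redis()
stats = r.info("stats")   # same counters exposed by the INFO command
hits, misses = stats["keyspace_hits"], stats["keyspace_misses"]
hit_rate = hits / (hits + misses) if (hits + misses) else 0.0
print(f"cache hit rate: {hit_rate:.1%}")
if hit_rate < 0.80:
    print("ALERT: hit rate below 80% -- check TTLs and key patterns")
```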

2. Eviction Count

  • Definition: The number of items removed from the cache due to memory constraints or eviction policies (e.g., LRU, LFU).
  • Importance: High eviction counts indicate insufficient cache size or poor eviction policy tuning.
  • Monitoring:
    • Track evictions via Redis evicted_keys or Memcached evictions stats.
    • Use time-series databases like Prometheus to analyze eviction trends.
    • Set thresholds for excessive evictions (e.g., >1000/hour).
  • Example: A Memcached instance evicts 500 keys per minute due to a small cache size, prompting a resize to 16GB to maintain hit rates.
  • Tools: Prometheus, Grafana, Hazelcast Management Center.

3. Latency of Reads/Writes

  • Definition: The time taken for cache read (hit/miss) and write operations.
  • Importance: Ensures cache operations meet performance goals (e.g., <1ms for reads, <2ms for writes).
  • Monitoring:
    • Measure latency percentiles (P50, P95, P99) using tools like Micrometer or AWS CloudWatch.
    • Log slow operations (>10ms) for investigation.
    • Compare cache latency to backend latency to quantify savings.
  • Example: Redis read latency averages 0.3ms, but P99 spikes to 5ms during high traffic, indicating contention or network issues.
  • Tools: Prometheus, Grafana, Micrometer, New Relic.

4. Memory Usage

  • Definition: The amount of memory consumed by the cache, including total and per-key usage.
  • Importance: Prevents memory exhaustion and ensures cost efficiency.
  • Monitoring:
    • Track memory usage via Redis used_memory or Memcached bytes stats.
    • Monitor memory fragmentation (e.g., Redis mem_fragmentation_ratio).
    • Set alerts for memory usage exceeding 80% of capacity.
  • Example: A Redis instance reaches 90% memory usage, triggering an alert to scale up or optimize key sizes.
  • Tools: RedisInsight, AWS CloudWatch, Prometheus.

5. Key Distribution and Skew

  • Definition: The distribution of keys across cache nodes and access frequency skew.
  • Importance: Identifies hot keys or uneven sharding that degrade performance.
  • Monitoring:
    • Use Redis Cluster’s key distribution stats or Hazelcast’s partition metrics.
    • Track hot keys with high access rates using Redis MONITOR or custom logging.
    • Visualize skew with heatmaps in Grafana.
  • Example: A Redis Cluster shows 80% of requests hitting one node due to a hot key (e.g., trending product), prompting key re-sharding.
  • Tools: RedisInsight, Hazelcast Management Center, Grafana.

6. TTL Effectiveness and Stale Reads

  • Definition: Measures how well TTLs balance freshness and hit rates, and the frequency of stale data served.
  • Importance: Ensures data freshness without sacrificing performance.
  • Monitoring:
    • Track expired keys via Redis expired_keys or custom TTL tracking.
    • Log stale reads by comparing cache vs. backend data versions.
    • Set alerts for high stale read rates (>1%).
  • Example: A news app with a 1-minute TTL for articles sees 5% stale reads, prompting a refresh-ahead strategy to reduce staleness.
  • Tools: Prometheus, Grafana, custom logging with ELK Stack.

Monitoring Tools

  • Prometheus: Time-series monitoring for cache metrics, with exporters for Redis, Memcached, and Hazelcast.
  • Grafana: Visualizes cache performance with dashboards for hit rates, latency, and memory.
  • RedisInsight: GUI for monitoring Redis metrics, key patterns, and performance.
  • AWS CloudWatch: Monitors ElastiCache and other cloud-based caches.
  • New Relic / Datadog: Application performance monitoring with cache-specific plugins.
  • ELK Stack: Logs cache errors and stale reads for root-cause analysis.
  • Micrometer: Integrates with Spring Cache and Caffeine for application-level metrics.

Conclusion

Caching is a multi-faceted technique that spans every layer of the stack—browser, CDN, edge, application, database, distributed, and local caches—each optimized for specific data and access patterns. By employing strategies like read-through, write-through, write-behind, cache-aside, and refresh-ahead, systems can cache every computation and data fetch, achieving sub-millisecond performance. Eviction and invalidation policies like LRU, LFU, FIFO, TTL, explicit invalidation, and versioned keys ensure efficient memory use and data freshness. Real-world applications, such as streaming platforms and e-commerce sites, leverage these techniques to handle millions of requests with minimal latency and cost, demonstrating the power of a well-designed caching architecture.

System Design : Load Balancer vs Reverse Proxy vs Forward Proxy vs API Gateway
+

In the intricate architecture of network communications, the roles of Load Balancers, Reverse Proxies, Forward Proxies, and API Gateways are pivotal. Each serves a distinct purpose in ensuring efficient, secure, and scalable interactions within digital ecosystems. As organisations strive to optimise their network infrastructure, it becomes imperative to understand the nuanced functionalities of these components. In this comprehensive exploration, we will dissect Load Balancers, Reverse Proxies, Forward Proxies, and API Gateways, shedding light on how they work, their specific use cases, and the unique contributions they make to the world of network technology.

Load Balancer:

Overview: A Load Balancer acts as a traffic cop, distributing incoming network requests across multiple servers to ensure no single server is overwhelmed. This not only optimises resource utilisation but also enhances the scalability and reliability of web applications.

How it Works:

A load balancer acts as a traffic cop, directing incoming requests to different servers based on various factors. These factors include:

  • Server load: Directing traffic to less busy servers.
  • Server health: Ensuring requests are sent to healthy servers.
  • Round-robin: Distributing traffic evenly among servers.
  • Least connections: Sending requests to the server with the fewest active connections.

Once a request is sent to a server, the server processes the request and sends a response back to the load balancer, which then forwards it to the client.
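
To make the routing factors concrete, here's a minimal sketch of round-robin and least-connections selection (server names are illustrative):

```python
import itertools

servers = ["app-1:8080", "app-2:8080", "app-3:8080"]

# Round-robin: hand requests to servers in a fixed rotation.
rotation = itertools.cycle(servers)

def round_robin() -> str:
    return next(rotation)

# Least connections: pick the server with the fewest active connections.
active = {s: 0 for s in servers}

def least_connections() -> str:
    server = min(active, key=active.get)
    active[server] += 1      # caller decrements when the request finishes
    return server

print([round_robin() for _ in range(4)])  # app-1, app-2, app-3, app-1
print(least_connections())                # app-1 (all tied; first wins)
```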

Benefits of Load Balancing

  • Improved performance: By distributing traffic across multiple servers, load balancers can significantly improve website or application speed.
  • Increased availability: If one server fails, the load balancer can redirect traffic to other available servers, minimising downtime.
  • Enhanced scalability: Load balancers can handle increasing traffic by adding more servers to the pool.
  • Optimised resource utilisation: By evenly distributing traffic, load balancers prevent server overload and maximise resource efficiency.

Types of Load Balancers

There are two main types of load balancers:

  • Hardware load balancers: Dedicated devices with high performance and reliability.
  • Software load balancers: Software applications that can run on servers, virtual machines, or in the cloud.

Real-world Applications

Load balancers are used in a wide range of applications, including:

  • E-commerce websites: Handling high traffic during sales or promotions.
  • Online gaming platforms: Ensuring smooth gameplay for multiple players.
  • Cloud computing environments: Distributing workloads across virtual machines.
  • Content delivery networks (CDNs): Optimising content delivery to users worldwide.

Reverse Proxy:

Overview: A Reverse Proxy serves as an intermediary between client devices and web servers. It receives requests from clients on behalf of the servers, acting as a gateway to handle tasks such as load balancing, SSL termination, and caching.

How Does it Work?

When a client requests a resource, the request is directed to the reverse proxy. The proxy then fetches the requested content from the origin server and delivers it to the client. This process provides several benefits:

  • Load balancing: Distributes incoming traffic across multiple origin servers.
  • Caching: Stores frequently accessed content locally, reducing response times.
  • Security: Protects origin servers by acting as a shield against attacks.
  • SSL termination: Handles SSL/TLS encryption and decryption, offloading the process from origin servers.

Benefits of a Reverse Proxy

  • Improved performance: Caching and load balancing enhance website speed.
  • Enhanced security: Protects origin servers from attacks like DDoS and SQL injection.
  • Scalability: Handles increased traffic without impacting origin servers.
  • Flexibility: Allows for A/B testing and geo-location routing.

Common Use Cases

  • Content Delivery Networks (CDNs): Distributes content across multiple locations for faster delivery.
  • Web application firewalls (WAFs): Protects web applications from attacks.
  • Load balancing: Distributes traffic across multiple servers.
  • API gateways: Manages API traffic and security.

Forward Proxy:

Overview: A Forward Proxy, also known simply as a proxy, acts as an intermediary between client devices and the internet. It facilitates requests from clients to external servers, providing functionalities such as content filtering, access control, and anonymity.

How Does it Work?

When a client wants to access a resource on the internet, it sends a request to the forward proxy. The proxy then fetches the requested content from the origin server and delivers it to the client. This process involves several steps:

  1. Client connects to the proxy server.
  2. Client sends a request to the proxy.
  3. Proxy forwards the request to the origin server.
  4. Origin server sends the response to the proxy.
  5. Proxy forwards the response to the client.
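
From the client's side, routing through a forward proxy is often a one-line configuration. Here's a sketch using the Python requests library (the proxy address is illustrative):

```python
import requests

# All traffic from this client is routed through the corporate proxy;
# the proxy address is a placeholder.
proxies = {
    "http": "http://proxy.internal:3128",
    "https": "http://proxy.internal:3128",
}
resp = requests.get("https://example.com", proxies=proxies, timeout=10)
print(resp.status_code)
```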

Benefits of a Forward Proxy

  • Caching: Stores frequently accessed content locally, reducing response times.
  • Security: Protects clients by filtering malicious content and hiding their IP addresses.
  • Access control: Restricts internet access based on user or group policies.
  • Anonymity: Allows users to browse the internet without revealing their identity.

Common Use Cases

  • Content filtering: Blocks access to inappropriate or harmful websites.
  • Parental control: Restricts online activities for children.
  • Corporate network security: Protects internal networks from external threats.
  • Anonymity: Enables users to browse the internet privately.

API Gateway:

Overview: An API Gateway is a server that acts as an API front-end, receiving API requests, enforcing throttling and security policies, passing requests to the back-end service, and then passing the response back to the requester. It serves as a central point for managing, monitoring, and securing APIs.

How Does it Work?

  1. Request Reception: The API Gateway receives API requests from clients.
  2. Request Processing: It processes the request, applying policies like authentication, authorisation, rate limiting, and caching.
  3. Routing: The gateway forwards the request to the appropriate backend service based on defined rules.
  4. Response Aggregation: It aggregates responses from multiple services, if necessary, and returns a unified response to the client.
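
A toy gateway sketch in Flask showing the same steps — authentication, rate limiting, routing, and response forwarding. The service URLs, limits, and in-memory rate limiter are all illustrative; real gateways keep this state in a distributed store:

```python
import time
import requests
from flask import Flask, request, jsonify

app = Flask(__name__)

# Illustrative routing table and limits.
ROUTES = {"orders": "http://orders-svc:8000", "users": "http://users-svc:8000"}
RATE_LIMIT, WINDOW = 100, 60     # 100 requests per 60 seconds per API key
hits = {}                        # api_key -> list of recent request times

@app.route("/<service>/<path:rest>")
def gateway(service, rest):
    # 1. Authentication: reject requests without an API key.
    api_key = request.headers.get("X-Api-Key")
    if not api_key:
        return jsonify(error="unauthorized"), 401
    # 2. Rate limiting: sliding window per API key.
    now = time.time()
    recent = [t for t in hits.get(api_key, []) if now - t < WINDOW]
    if len(recent) >= RATE_LIMIT:
        return jsonify(error="rate limit exceeded"), 429
    hits[api_key] = recent + [now]
    # 3. Routing: forward to the matching backend service.
    if service not in ROUTES:
        return jsonify(error="unknown service"), 404
    backend = requests.get(f"{ROUTES[service]}/{rest}", timeout=5)
    # 4. Return the backend's response to the client.
    return backend.content, backend.status_code
```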

Benefits of an API Gateway

  • Improved performance: Caching, load balancing, and request aggregation can enhance performance.
  • Enhanced security: Provides a centralised point for enforcing security policies.
  • Simplified development: Isolates clients from backend complexities.
  • Monetisation and analytics: Enables tracking API usage and generating revenue.

Common Use Cases

  • Microservices architectures: Manages communication between multiple microservices.
  • Mobile app development: Provides a unified interface for mobile apps to access backend services.
  • API management: Enforces API policies, monitors usage, and generates analytics.
  • IoT applications: Handles a large number of devices and data streams.

Key Features of an API Gateway

  • Authentication and authorisation: Verifies user identity and permissions.
  • Rate limiting: Prevents API abuse through throttling.
  • Caching: Improves performance by storing frequently accessed data.
  • Load balancing: Distributes traffic across multiple backend services.
  • API versioning: Manages different API versions.
  • Fault tolerance: Handles failures gracefully.
  • Monitoring and analytics: Tracks API usage and performance.

Conclusion:

In the intricate web of network components, Load Balancers, Reverse Proxies, Forward Proxies, and API Gateways play distinct yet interconnected roles. Load Balancers ensure even distribution of traffic to optimise server performance, while Reverse Proxies act as intermediaries for clients and servers, enhancing security and performance.

Forward Proxies, on the other hand, serve as gatekeepers between client devices and the internet, enabling content filtering and providing anonymity. Lastly, API Gateways streamline the management, security, and accessibility of APIs, serving as centralised hubs for diverse services.

Understanding the unique functionalities of these components is essential for organisations seeking to build robust, secure, and scalable network infrastructures. As technology continues to advance, the synergy of Load Balancers, Reverse Proxies, Forward Proxies, and API Gateways will remain pivotal in shaping the future of network architecture.

Choosing Your Database: What Every Engineer Should Know
+

Introduction

Choosing the right database is a critical decision that can significantly impact the performance, scalability, and maintainability of your application. With a plethora of options available, ranging from traditional SQL databases to modern NoSQL solutions, making the right choice requires a deep understanding of your application's needs, the nature of your data, and the specific use cases you are targeting. This article aims to guide you through the different types of databases, their typical use cases, and the factors to consider when selecting the best one for your project.

Selecting the right database is more than just a technical decision; it's a strategic choice that affects how efficiently your application runs, how easily it scales, and how well it meets user expectations. Whether you’re building a small web app or a large enterprise system, the database you choose will influence data management, user experience, and operational costs.

SQL Databases

Use Cases

SQL (Structured Query Language) databases are the traditional backbone of many applications, particularly where data is structured, relationships are well-defined, and consistency is paramount. These databases are known for their strong ACID (Atomicity, Consistency, Isolation, Durability) properties, which ensure data integrity and reliable transactions.

Examples

MySQL: An open-source relational database widely used for web applications.

PostgreSQL: Known for its extensibility and support for advanced data types and complex queries.

Microsoft SQL Server: A comprehensive enterprise-level database solution with robust features.

Oracle: A scalable and secure platform suitable for mission-critical applications.

SQLite: A lightweight, serverless database often used in embedded systems or small-scale applications.

When to Use SQL Databases

Opt for SQL databases when your application requires a stable, well-defined schema, strict consistency, and the ability to handle complex transactions. These databases are ideal for financial systems, e-commerce platforms, and any application where data relationships and integrity are crucial.

NewSQL Databases

Use Cases

NewSQL databases aim to blend the scalability of NoSQL with the strong consistency guarantees of traditional SQL databases. They are designed to handle large-scale applications with distributed architectures, providing the benefits of SQL while enabling horizontal scalability.

Examples

CockroachDB: A distributed SQL database known for its strong consistency and global distribution capabilities.

Google Spanner: A globally distributed database that offers strong consistency and horizontal scalability.

When to Use NewSQL Databases

Choose NewSQL databases for applications that require both the consistency of SQL and the scalability of NoSQL. These databases are particularly suited for large-scale applications that demand high availability and reliable distributed transactions.

Data Warehouses

Use Cases

Data warehouses are specialised for storing and analysing large volumes of data. They are optimised for business intelligence (BI), data analytics, and reporting, making them the go-to solution for organisations looking to extract insights from massive datasets.

Examples

Amazon Redshift: A fully managed data warehouse with high-performance query capabilities.

Google BigQuery: A serverless, highly scalable data warehouse for real-time analytics.

Snowflake: A cloud-based data warehouse known for its flexibility, scalability, and ease of use.

Teradata: Renowned for its scalability and parallel processing capabilities.

When to Use Data Warehouses

Data warehouses are ideal when your focus is on data analytics, reporting, and decision-making processes. If your application involves processing large datasets and requires complex queries and aggregations, a data warehouse is the right choice.

NoSQL Databases

Document Databases

Document databases, such as MongoDB, store data in flexible, JSON-like documents. They are ideal for applications where the data model is dynamic and unstructured, offering adaptability to changing requirements.

Wide Column Stores

Wide column stores, like Cassandra, are designed for high-throughput scenarios, particularly in distributed environments. They excel at handling large volumes of data across many servers, making them suitable for applications requiring fast read/write operations.

In Memory Databases

In-memory databases, such as Redis, store data in the system's memory rather than on disk. This results in extremely low latency and high throughput, making them perfect for real-time applications like caching, gaming, or financial trading systems.

When to Use NoSQL Databases

Document Databases: When your application needs flexibility in data modeling and the ability to store nested, complex data structures.

Wide Column Stores: For applications with high write/read throughput requirements, especially in distributed environments.

In-Memory Databases: When rapid data access and low-latency responses are critical, such as in real-time analytics or caching.

B-Tree vs. LSM Tree

  • Choose B-Tree if your application demands fast point lookups and low-latency reads, with fewer writes.
  • Opt for LSM Tree if you need high write throughput with occasional reads, such as in time-series databases or log aggregation systems.

Other Key Considerations in Database Selection

Development Speed

Consider how quickly your team can develop and maintain the database. SQL databases offer predictability with well-defined schemas, whereas NoSQL databases provide flexibility but may require more effort in schema design.

Ease of Maintenance

Evaluate the ease of database management, including backups, scaling, and general maintenance tasks. SQL databases often come with mature tools for administration, while NoSQL databases may offer simpler scaling options.

Team Expertise

Assess the skill set of your development team. If your team is more familiar with SQL databases, it might be advantageous to stick with them. Conversely, if your team has experience with NoSQL databases, leveraging that expertise could lead to faster development and deployment.

Hybrid Approaches

Sometimes, the best solution is a hybrid approach, using different databases for different components of your application. This polyglot persistence strategy allows you to leverage the strengths of multiple database technologies.

Scalability and Performance

Scalability is a crucial factor. SQL databases typically scale vertically, while NoSQL databases are designed for horizontal scaling. Performance should be tested and benchmarked based on your specific use case to ensure optimal results.

Security and Compliance

Security and compliance are non-negotiable in many industries. Evaluate the security features and compliance certifications of the databases you are considering. Some databases are better suited for highly regulated industries due to their robust security frameworks.

Community and Support

A strong and active community can be a lifeline when you encounter challenges. Consider the size and activity level of the community surrounding the database, as well as the availability of commercial support options.

Cost Considerations

Cost is always a factor. Evaluate the total cost of ownership, including licensing fees, hosting costs, and ongoing maintenance expenses. Cloud-based databases often provide flexible pricing models based on actual usage, which can be more cost-effective for scaling applications.

Conclusion

Choosing the right database is not a one-size-fits-all decision. It requires careful consideration of your application's specific needs, the nature of your data, and the expertise of your team. Whether you opt for SQL, NewSQL, NoSQL, or a hybrid approach, the key is to align your choice with your long-term goals and be prepared to adapt as your application evolves. Remember, the database landscape is continuously evolving, and staying informed about the latest developments will help you make the best decision for your project.

Give Me 10 Minutes — I’ll Make Kafka Click for You
+

Welcome to the Kafka Crash Course! Whether you're a beginner or a seasoned engineer, this guide will help you understand Kafka from its basic concepts to its architecture, internals, and real-world applications.

Give yourself just 10 minutes, and you'll be comfortable with Kafka.

Let’s dive in!

1. The Basics

What is Kafka?

Apache Kafka is an open-source distributed event streaming platform capable of handling trillions of events per day. Originally developed by LinkedIn, Kafka has become the backbone of real-time data streaming applications. It’s not just a messaging system; it’s a platform for building real-time data pipelines and streaming apps, and it is also very popular in the microservices world for asynchronous communication.

Key Terminology:

  • Topics: Think of topics as categories or feeds to which data records are published. In Kafka, topics are the primary means for organizing and managing data.
  • Producers: Producers are responsible for sending data to Kafka topics. They write data to Kafka in a continuous flow, making it available for consumption.
  • Consumers: Consumers read and process data from Kafka topics. They can consume data individually or as part of a group, allowing for distributed data processing.
  • Brokers: Kafka runs on a cluster of servers called brokers. Each broker is responsible for managing the storage and retrieval of data within the Kafka ecosystem.
  • Partitions: To manage large volumes of data, topics are split into partitions. Each partition can be thought of as a log where records are stored in a sequence. This division enables Kafka to scale horizontally.
  • Replicas: Copies of partitions stored on other brokers to prevent data loss.

Kafka operates on a publish-subscribe messaging model, where producers publish records to topics, and consumers subscribe to those topics to receive records.

Push/Pull: Producers push data, consumers pull at their own pace.

This decoupled architecture allows for flexible, scalable, and fault-tolerant data handling.

A Cluster has one or more brokers

  • A Kafka cluster is a distributed system composed of multiple machines (brokers). These brokers work together to store, replicate, and distribute messages.

A producer sends messages to a topic

  • A topic is a logical grouping of related messages. Producers send messages to specific topics. For example, a "user-activity" topic could store information about user actions on a website.

A Consumer Subscribes to a topic

  • Consumers subscribe to topics to receive messages. They can subscribe to one or more topics.

A Partition has one or more replicas

  • A replica is a copy of a partition stored on a different broker. This redundancy ensures data durability and availability.

Each Record consists of a KEY, a VALUE and a TIMESTAMP

  • A record is the basic unit of data in Kafka. It consists of a key, a value, and a timestamp. The key is used for partitioning and ordering messages, while the value contains the actual data. The timestamp is used for ordering and retention policies.

A Broker has zero or one replica per partition

  • Each broker stores at most one replica of a partition. This ensures that the data is distributed evenly across the cluster.

A topic is divided into one or more partitions

  • To improve fault tolerance and performance, Kafka divides a topic into smaller segments called partitions. Each partition is replicated across multiple brokers, ensuring that data is not lost if a broker fails.

A consumer is a member of a CONSUMER GROUP

  • Consumers are grouped into consumer groups. This allows multiple consumers to share the workload of processing messages from a topic. Each consumer group can only have one consumer per partition.

A Partition has one consumer per group

  • To ensure that each message is processed only once, Kafka assigns only one consumer from a consumer group to each partition.

An OFFSET is the number assigned to a record in a partition

  • The offset is a unique identifier for a record within a partition. Consumers use offsets to keep track of their progress and avoid processing the same message multiple times.

A Kafka Cluster maintains a PARTITIONED LOG

  • Kafka stores messages in a partitioned log. This log is distributed across the brokers in the cluster and is highly durable and scalable.

2. 🛠️ Kafka Architecture

Kafka Producer

Producers: Producers are responsible for sending data to Kafka topics. They write data to Kafka in a continuous flow, making it available for consumption.

Producer Workflow:

  1. Create Producer Instance: The producer client is initialized, providing necessary configuration parameters like bootstrap servers, topic name, and serialization format.
  2. Produce Message: The producer creates a message object, setting the key and value.
  3. Send Message: The producer sends the message to the Kafka cluster, specifying the topic and optionally the partition.
  4. Handle Acknowledgements: The producer can configure the level of acknowledgement required from the broker nodes. This can range from none to all replicas, affecting reliability and performance.
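
Here's what that workflow looks like as a minimal sketch with the kafka-python client (the topic name and broker address are illustrative):

```python
from kafka import KafkaProducer   # kafka-python client (one of several options)

# 1. Create the producer with bootstrap servers and a value serializer.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: v.encode("utf-8"),
    acks="all",   # 4. wait for all in-sync replicas to acknowledge
)

# 2-3. Build and send a message; the key controls partition assignment.
future = producer.send("user-activity", key=b"user_42", value="clicked_buy")
meta = future.get(timeout=10)       # block until acked (or raise on failure)
print(meta.topic, meta.partition, meta.offset)
producer.flush()
```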

Consumers: Consumers read and process data from Kafka topics. They can consume data individually or as part of a group, allowing for distributed data processing.

Consumer Workflow:

  1. Create Consumer Instance: The consumer client is initialized, providing necessary configuration parameters like bootstrap servers, group ID, topic subscriptions, and offset management strategy.
  2. Subscribe to Topics: The consumer subscribes to the desired topics.
  3. Consume Messages: The consumer receives messages from the Kafka cluster, processing them as they arrive.
  4. Commit Offsets: The consumer commits the offsets of the messages it has processed to ensure that it doesn't consume the same messages again in case of restarts or failures.
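
And the matching consumer workflow, again sketched with kafka-python and manual offset commits (the group ID and topic are illustrative):

```python
from kafka import KafkaConsumer   # kafka-python client

# 1-2. Create the consumer and subscribe; auto-commit is disabled so
# offsets are recorded only after successful processing.
consumer = KafkaConsumer(
    "user-activity",
    bootstrap_servers="localhost:9092",
    group_id="analytics",
    enable_auto_commit=False,
    auto_offset_reset="earliest",
)

# 3. Consume messages as they arrive.
for msg in consumer:
    print(f"partition={msg.partition} offset={msg.offset} value={msg.value}")
    # 4. Commit only after processing succeeds, so a crash leads to a
    # replay rather than a silently lost message.
    consumer.commit()
```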

Kafka Clusters:

At the heart of Kafka is its cluster architecture. A Kafka cluster consists of multiple brokers, each of which manages one or more partitions of a topic. This distributed nature allows Kafka to achieve high availability and scalability. When data is produced, it is distributed across these brokers, ensuring that no single point of failure exists.

Topic Partitioning:

Partitioning is Kafka's secret sauce for scalability and high throughput. By splitting a topic into multiple partitions, Kafka allows for parallel processing of data. Each partition can be stored on a different broker, and consumers can read from multiple partitions simultaneously, significantly increasing the speed and efficiency of data processing.

Replication and Fault Tolerance:

To ensure data reliability, Kafka implements replication. Each partition is replicated across multiple brokers, and one of these replicas acts as the leader. The leader handles all reads and writes for that partition, while the followers replicate the data. If the leader fails, a follower automatically takes over, ensuring uninterrupted service.

Zookeeper’s Role:

Zookeeper is an integral part of Kafka’s architecture. It keeps track of the Kafka brokers, topics, partitions, and their states. Zookeeper also helps in leader election for partitions and manages configuration settings. Though Kafka has been moving towards replacing Zookeeper with its own internal quorum-based system, Zookeeper remains a key component in many Kafka deployments today.

3. Kafka Internals: Peeking Under the Hood

Log-based Storage:

Kafka’s data storage model is log-based, meaning it stores records in a continuous sequence in a log file. Each partition in Kafka corresponds to a single log, and records are appended to the end of this log. This design allows Kafka to provide high throughput with minimal latency. Kafka’s use of a write-ahead log ensures that data is reliably stored before being made available to consumers.

Kafka Delivery Semantics: Kafka supports three delivery guarantees between producers and consumers: at-most-once, at-least-once, and exactly-once (the last achieved with idempotent producers and transactions).

Offset Management:
Offsets are an essential part of Kafka’s operation. Each record in a partition is assigned a unique offset, which acts as an identifier for that record. Consumers use offsets to keep track of which records have been processed. Kafka allows consumers to commit offsets, enabling them to resume processing from the last committed offset in case of a failure.

Retention Policies:
Kafka provides flexible retention policies that dictate how long data is kept in a topic before being deleted or compacted. By default, Kafka retains data for a set period, after which it is automatically purged. However, Kafka also supports log compaction, where older records with the same key are compacted to keep only the latest version, saving space while preserving important data.

Compaction:
Log compaction is a Kafka feature that ensures that the latest state of a record is retained while older versions are deleted. This is particularly useful for use cases where only the most recent data is relevant, such as in maintaining the current state of a key-value store. Compaction happens asynchronously, allowing Kafka to handle high write loads while maintaining data efficiency.
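
Retention and compaction are configured per topic. As an illustration, here is a sketch using kafka-python's admin client; the topic names and settings are invented for the example:

# Create a compacted topic and a time-limited topic (assumes kafka-python
# and a broker at localhost:9092; names and values are illustrative).
from kafka.admin import KafkaAdminClient, NewTopic

admin = KafkaAdminClient(bootstrap_servers="localhost:9092")
admin.create_topics([
    # Keep only the latest record per key (log compaction).
    NewTopic(name="user-profiles", num_partitions=3, replication_factor=1,
             topic_configs={"cleanup.policy": "compact"}),
    # Delete records older than 7 days (time-based retention).
    NewTopic(name="click-events", num_partitions=3, replication_factor=1,
             topic_configs={"cleanup.policy": "delete",
                            "retention.ms": str(7 * 24 * 60 * 60 * 1000)}),
])
admin.close()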

4. Real-World Applications of Kafka

Real-Time Analytics:
One of Kafka’s most common use cases is in real-time analytics. Companies use Kafka to collect and analyse data as it’s generated, enabling them to react to events as they happen. For example, Kafka can be used to monitor server logs in real time, allowing teams to detect and respond to issues before they escalate.

Event Sourcing:
Kafka is also a powerful tool for event sourcing, a pattern where changes to the state of an application are logged as a series of events. This approach is beneficial for building applications that require a reliable audit trail. By using Kafka as an event store, developers can replay events to reconstruct the state of an application at any point in time.

Microservices Communication:
Kafka’s ability to handle high-throughput, low-latency communication makes it ideal for microservices architectures. Instead of services communicating directly with each other, they can publish and consume events through Kafka. This decoupling reduces dependencies and makes the system more resilient to failures.

Data Integration:
Kafka serves as a central hub for data integration, enabling seamless movement of data between different systems. Whether you’re ingesting data from databases, sensors, or other sources, Kafka can stream that data to data warehouses, machine learning models, or real-time dashboards. This capability is invaluable for building data-driven applications that require consistent and reliable data flow.

5. Kafka Connect

  • Data Integration Framework: Kafka Connect is a tool for streaming data between Kafka and external systems like databases, message queues, or file systems.
  • Source and Sink Connectors: It provides Source Connectors to pull data from systems into Kafka and Sink Connectors to push data from Kafka to external systems.
  • Scalable and Distributed: Kafka Connect is distributed and can be scaled across multiple workers, providing fault tolerance and high availability.
  • Schema Management: Kafka Connect supports schema management with Confluent Schema Registry, ensuring consistency in data formats across different systems.
  • Configuration Driven: Kafka Connect allows easy configuration of connectors through JSON or properties files, requiring minimal coding effort (an example follows this list).
  • Standalone or Distributed Mode: Kafka Connect can run in standalone mode for small setups or distributed mode for larger, more complex environments.
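
As an illustration of the configuration-driven approach, a connector is typically defined by a small JSON document posted to the Connect REST API (POST /connectors). This sketch uses the FileStreamSource connector that ships with Kafka; the connector name, file path, and topic are invented for the example:

{
  "name": "demo-file-source",
  "config": {
    "connector.class": "org.apache.kafka.connect.file.FileStreamSourceConnector",
    "tasks.max": "1",
    "file": "/var/log/app/events.log",
    "topic": "file-events"
  }
}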

Conclusion

By now, you should have a solid understanding of Kafka, from the basics to the intricacies of its architecture and internals. Kafka is a versatile tool that can be applied to various real-world scenarios, from real-time analytics to event-driven architectures. Whether you’re planning to integrate Kafka into your existing systems or build something entirely new, this crash course equips you with the knowledge to harness Kafka’s full potential.

LEARN Microservices: Zero to Hero in 10 Mins
+


Welcome to the Microservices Crash Course! Whether you're a beginner or a seasoned engineer, this guide will help you understand microservices, from basic concepts to architecture, best practices, and real-world applications.

Introduction to Microservices

Ever wonder how tech giants like Netflix and Amazon manage to run their massive platforms so smoothly? The secret is microservices! This architecture allows them to scale quickly, make changes without disrupting the entire platform, and deliver seamless experiences to millions of users. Microservices are the architecture behind the success of some of the most popular services we use daily!

What are Microservices?

Imagine a complex application like a car. Instead of building the entire car as one big unit, we can break it down into smaller, independent components like the engine, wheels, and brakes. Each component has its own function and can be developed, tested, and replaced separately. This approach is similar to microservices architecture.

Microservices is an architectural style where an application is built as a collection of small, independent services. Each service is responsible for a specific part of the application, such as user management, product inventory, or payment processing. These services communicate with each other through APIs (usually over the network), but they are developed, deployed, and managed separately.

In simpler terms, instead of building one large application, microservices break it down into smaller, manageable pieces that work together.

Benefits of Microservices

  1. Increased Agility: Microservices allow teams to develop, test, and deploy services independently, speeding up the release cycle and enabling more frequent updates and improvements.
  2. Scalability: Individual components can be scaled independently, allowing for more efficient use of resources and improving application performance during varying loads.
  3. Resilience: Failure in one service doesn’t necessarily bring down the entire system, as services are isolated and can be designed to handle failures gracefully.
  4. Technological Diversity: Teams can choose the best technology stack for each service based on its specific requirements, rather than being locked into a single technology for the entire application.
  5. Deployment Flexibility: Microservices can be deployed across multiple servers or cloud environments to enhance availability and reduce latency for end users.
  6. Easier Maintenance and Understanding: Smaller codebases and service scopes make it easier for new developers to understand and for teams to maintain and update code.
  7. Improved Fault Isolation: Issues can be isolated and addressed in specific services without impacting the functionality of others, leading to more stable and reliable applications.
  8. Optimised for Continuous Delivery and Deployment: Microservices fit well with CI/CD practices, enabling automated testing and deployment, which further accelerates development cycles and reduces risk.
  9. Decentralised Governance: Teams have more autonomy over the services they manage, allowing for faster decision making and innovation.
  10. Efficient Resource Utilisation: Services can be deployed in containers that utilise system resources more efficiently, leading to cost savings in infrastructure.

Components required to build microservice architecture

Let's try to understand the components required to build a microservice architecture.

1. Containerisation: Start with understanding containers, which package code and dependencies for consistent deployment.
2. Container Orchestration: Learn container orchestration tools for efficient management, scaling, and networking of containers.
3. Load Balancing: Explore load balancers to distribute network or app traffic across servers for scalability and reliability.
4. Monitoring and Alerting: Implement monitoring solutions to track application functionality, performance, and communication.
5. Distributed Tracing: Understand distributed tracing tools to debug and trace requests across microservices.
6. Message Brokers: Learn how message brokers facilitate communication between applications, systems, and services.
7. Databases: Explore data storage techniques to persist data needed for further processes or reporting.
8. Caching: Implement caching to reduce latency in microservice communication.

9. Cloud Service Providers: Familiarise yourself with third-party cloud services for infrastructure, application, and storage needs.
10. API Management: Dive into API design, publishing, documentation, and security in a secure environment.
11. Application Gateway: Understand application gateways for network security and filtering of incoming traffic.
12. Service Registry: Learn about service registries to track available instances of each microservice.

Microservice Lifecycle: From Development to Production

In a microservice architecture, the development, deployment, and management of services are key components of ensuring the reliability, scalability, and performance of the overall system. This approach to software development emphasises breaking down complex applications into smaller, independently deployable services, each responsible for specific business functions.

However, to effectively implement a microservice architecture, a structured workflow encompassing pre-production and production stages is essential.

Pre-Production Steps:

1. Development : Developers write code for microservices and test it in their development environments.

2. Configuration Management : Configuration settings for microservices are adjusted and tested alongside development.

3. CI/CD Setup : Continuous Integration/Continuous Deployment pipelines are configured to automate testing, building, and deployment processes.

4. Pre-Deployment Checks : A pre-deployment step is introduced to ensure that necessary checks or tasks are completed before deploying changes to production. This may include automated tests, code quality checks, or security scans.

Production Steps:

1. Deployment : Changes are deployed to production using CI/CD pipelines.

2. Load Balancer Configuration : Load balancers are configured to distribute incoming traffic across multiple instances of microservices.

3. CDN Integration : CDN integration is set up to cache static content and improve content delivery performance.

4. API Gateway Configuration : API gateway is configured to manage and secure access to microservices.

5. Caching Setup : Caching mechanisms are implemented to store frequently accessed data and reduce latency.

6. Messaging System Configuration : Messaging systems are configured for asynchronous communication between microservices.

7. Monitoring Implementation : Monitoring tools are set up to monitor the health, performance, and behaviour of microservices in real time.

8. Object Store Integration : Integration with object stores is established to store and retrieve large volumes of unstructured data efficiently.

9. Wide Column Store or Linked Data Integration : Integration with databases optimised for storing large amounts of semi-structured or unstructured data is set up.

By following these structured steps, organisations can effectively manage the development, deployment, and maintenance of microservices, ensuring they meet quality standards, performance requirements, and business objectives.

Best Practices for Microservice Architecture

Here are some best practices:

  • Single Responsibility: Each microservice should have one purpose, making it easier to manage.
  • Separate Data Store: Isolate data storage per microservice to avoid cross-service impact.
  • Asynchronous Communication: Use patterns like message queues to decouple services (see the sketch after this list).
  • Containerisation: Package microservices with Docker for consistency and scalability.
  • Orchestration: Use Kubernetes for load balancing and monitoring.
  • Build and Deploy Separation: Keep these processes distinct to ensure smooth deployments.
  • Domain-Driven Design (DDD): Define microservices around specific business capabilities.
  • Stateless Services: Keep services stateless for easier scaling.
  • Micro Frontends: Break down UIs into independently deployable components.

Additional practices include robust Monitoring and Observability, Security, Automated Testing, Versioning, and thorough Documentation.
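
To illustrate the asynchronous-communication practice above, here is a minimal sketch using RabbitMQ through the pika client. The broker location, queue name, and payload are assumptions for the example (a message broker like Kafka, covered earlier, works just as well):

# Decoupled order flow: the order service publishes an event; the email
# service consumes it later. Assumes: pip install pika, RabbitMQ on localhost.
import json
import pika

QUEUE = "order.created"  # hypothetical event queue name

def publish_order(order):
    """Order service: emit an event instead of calling the email service."""
    conn = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
    channel = conn.channel()
    channel.queue_declare(queue=QUEUE, durable=True)
    channel.basic_publish(exchange="", routing_key=QUEUE,
                          body=json.dumps(order).encode("utf-8"))
    conn.close()

def run_email_service():
    """Email service: process events whenever it is up; outages don't block orders."""
    conn = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
    channel = conn.channel()
    channel.queue_declare(queue=QUEUE, durable=True)

    def on_message(ch, method, properties, body):
        order = json.loads(body)
        print(f"sending confirmation for order {order['id']}")

    channel.basic_consume(queue=QUEUE, on_message_callback=on_message,
                          auto_ack=True)
    channel.start_consuming()

if __name__ == "__main__":
    publish_order({"id": 42, "amount": 9.99})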

Conclusion:

Just like Netflix and Amazon, many of the world’s most popular companies rely on microservices to stay ahead in the fast-moving tech world. With the ability to scale effortlessly, update faster, and improve system reliability, microservices have become the go-to architecture for building modern, high-performance applications. Embrace microservices, and you’re not just keeping up with the trends—you’re building a system that can handle anything the future throws at it!

Master these 8 Powerful Data Structures to Ace your Interview
+

Outline

1. Introduction

- Importance of mastering data structures in tech

- Overview of the 8 essential data structures

2. B-Tree: Your Go-To for Organising and Searching Massive Datasets

- What is a B-Tree?

- How B-Trees work

- Real-world analogy: A library’s catalog system

- Impact of B-Trees on databases and file systems

3. Hash Table: The Champion of Lightning-Fast Data Retrieval

- What is a Hash Table?

- Key-value pair structure

- Real-world analogy: A well-organized filing cabinet

- Applications in caching, symbol tables, and databases

4. Trie: Master of Handling Dynamic Data and Hierarchical Structures

- What is a Trie?

- Structure and function of Tries

- Real-world analogy: A language dictionary

- Uses in autocomplete features and prefix-based searches

5. Bloom Filter: The Space-Saving Detective of the Data World

- What is a Bloom Filter?

- How Bloom Filters work

- Real-world analogy: A detective’s quick decision-making process

- Applications in spell check, caching, and network routers

6. Inverted Index: The Secret Weapon of Search Engines

- What is an Inverted Index?

- How Inverted Indexes function

- Real-world analogy: An index in the back of a book

- Role in information retrieval systems and search engines

7. Skip List: The Versatile Champion of Fast Searching, Insertion, and Deletion

- What is a Skip List?

- How Skip Lists improve performance

- Real-world analogy: A well-designed game strategy

- Uses in in-memory databases and priority queues

8. Log-Structured Merge (LSM) Tree: The Write-Intensive Workload Warrior

- What is an LSM Tree?

- Structure and benefits of LSM Trees

- Real-world analogy: Optimising a high-traffic intersection

- Applications in key-value stores and distributed databases

9. SSTable (Sorted String Table): The Persistent Storage Superhero

- What is an SSTable?

- How SSTables enhance data storage

- Real-world analogy: Organising books by title in a library

- Uses in distributed environments like Apache Cassandra

10. Conclusion

- Recap of the importance of these data structures

- Encouragement to explore, innovate, and conquer tech challenges

11. FAQs

- What is the most important data structure to learn first?

- How do B-Trees differ from Binary Trees?

- Why are Hash Tables so efficient?

- Where are Bloom Filters commonly used?

- How does mastering these data structures impact career growth?

Introduction

In the fast-paced world of technology, understanding data structures is like having a secret weapon up your sleeve. Whether you're tackling complex coding challenges, optimising system performance, or designing scalable applications, mastering key data structures can make all the difference. Today, we’re diving into eight essential data structures that every tech professional should know. Each of these structures has its own unique strengths, and when used correctly, they can help you conquer any tech challenge that comes your way.

B-Tree: Your Go-To for Organising and Searching Massive Datasets

What is a B-Tree?

A B-Tree is a self-balancing tree data structure that maintains sorted data and allows for efficient insertion, deletion, and search operations. It’s particularly useful for organising large datasets in databases and file systems.

How B-Trees Work

B-Trees work by keeping data sorted and balanced across multiple levels of nodes. Each node contains a range of keys and can have multiple child nodes, which helps in maintaining a balanced structure. This ensures that operations like search, insert, and delete are performed efficiently, even with large datasets.

Real-World Analogy: A Library’s Catalog System

Imagine walking into a library with thousands of books. Without a catalog system, finding a specific book would be a nightmare. A B-Tree is like that catalog system, organising books (or data) in such a way that you can quickly locate what you need.

Impact of B-Trees on Databases and File Systems

B-Trees are foundational for systems that require rapid data retrieval and insertion, such as databases and file systems. They are designed to minimise disk reads and writes, making them ideal for storage systems handling large volumes of information.

Hash Table: The Champion of Lightning-Fast Data Retrieval

What is a Hash Table?

A Hash Table is a data structure that maps keys to values using a hash function. This function takes an input (the key) and returns a unique index in an array where the corresponding value is stored.

Key-Value Pair Structure

The beauty of Hash Tables lies in their simplicity. You can think of them as a well-organised filing cabinet where each file (value) is labeled with a unique identifier (key). This allows for lightning-fast retrieval of information.

Real-World Analogy: A Well-Organised Filing Cabinet

Picture a filing cabinet with labeled folders. When you need a document, you simply look for the label, open the folder, and there it is. Hash Tables work the same way, ensuring quick and efficient access to your data.

Applications in Caching, Symbol Tables, and Databases

Hash Tables are widely used in applications that require fast lookups, such as caching, symbol tables, and databases. Their ability to provide constant-time data retrieval makes them indispensable in many systems.
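
As a quick illustration, here is a toy hash table with separate chaining in Python; real programs would simply use the built-in dict, which implements the same idea:

# Toy hash table with separate chaining (illustrative only).
class HashTable:
    def __init__(self, capacity=16):
        self.buckets = [[] for _ in range(capacity)]

    def _bucket(self, key):
        # The hash function maps a key to a bucket index.
        return self.buckets[hash(key) % len(self.buckets)]

    def put(self, key, value):
        bucket = self._bucket(key)
        for i, (k, _) in enumerate(bucket):
            if k == key:               # overwrite an existing key
                bucket[i] = (key, value)
                return
        bucket.append((key, value))    # otherwise chain a new entry

    def get(self, key, default=None):
        for k, v in self._bucket(key):
            if k == key:
                return v
        return default

table = HashTable()
table.put("invoice-42", {"total": 99.0})
print(table.get("invoice-42"))  # {'total': 99.0}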

Trie: Master of Handling Dynamic Data and Hierarchical Structures

What is a Trie?

A Trie, also known as a prefix tree, is a specialised data structure used to store a dynamic set of strings. It’s particularly effective for tasks like autocomplete, spell check, and searching for words with a common prefix.

Structure and Function of Tries

Tries organise data hierarchically, with each node representing a character in a string. The structure allows for efficient insertion and search operations, especially when dealing with large datasets of strings.

Real-World Analogy: A Language Dictionary

Think of a Trie as a language dictionary. When you look up a word, you start with the first letter, then the second, and so on, until you find the word you need. This hierarchical approach makes it easy to handle dynamic data.

Uses in Autocomplete Features and Prefix-Based Searches

Tries are the backbone of many autocomplete systems. By efficiently managing dynamic data, they enable quick and accurate suggestions as users type, enhancing the user experience in applications.
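
A minimal trie sketch in Python showing insert and prefix search, the operation behind autocomplete (the word list is made up for the example):

# Minimal trie with insert and prefix enumeration.
class TrieNode:
    def __init__(self):
        self.children = {}   # char -> TrieNode
        self.is_word = False

class Trie:
    def __init__(self):
        self.root = TrieNode()

    def insert(self, word):
        node = self.root
        for ch in word:
            node = node.children.setdefault(ch, TrieNode())
        node.is_word = True

    def starts_with(self, prefix):
        # Walk down to the prefix node, then enumerate every word below it.
        node = self.root
        for ch in prefix:
            if ch not in node.children:
                return
            node = node.children[ch]
        stack = [(node, prefix)]
        while stack:
            node, word = stack.pop()
            if node.is_word:
                yield word
            for ch, child in node.children.items():
                stack.append((child, word + ch))

trie = Trie()
for w in ["car", "card", "care", "dog"]:
    trie.insert(w)
print(sorted(trie.starts_with("car")))  # ['car', 'card', 'care']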

Bloom Filter: The Space-Saving Detective of the Data World

What is a Bloom Filter?

A Bloom Filter is a probabilistic data structure that efficiently tests whether an element is part of a set. While it may occasionally give false positives, it never gives false negatives, making it useful for applications where memory space is limited.

How Bloom Filters Work

Bloom Filters use multiple hash functions to map elements to a bit array. When checking if an element is in the set, the filter looks at the corresponding bits. If all bits are set to 1, the element might be in the set; if not, it definitely isn’t.
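
A compact sketch of that mechanism in Python (the sizes, and the use of a salted SHA-256 as the family of hash functions, are arbitrary choices for the illustration):

# Toy Bloom filter: k hash functions set k bits; lookups can give false
# positives but never false negatives.
import hashlib

class BloomFilter:
    def __init__(self, size=1024, num_hashes=3):
        self.size = size
        self.num_hashes = num_hashes
        self.bits = [False] * size

    def _positions(self, item):
        # Derive k bit positions by salting one hash function k ways.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = True

    def might_contain(self, item):
        return all(self.bits[pos] for pos in self._positions(item))

bf = BloomFilter()
bf.add("hello")
print(bf.might_contain("hello"))  # True
print(bf.might_contain("zebra"))  # False (or, rarely, a false positive)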

Real-World Analogy: A Detective’s Quick Decision-Making Process

Imagine a detective making quick decisions based on limited evidence. A Bloom Filter works similarly, quickly determining if something is likely present without needing to be 100% sure.

Applications in Spell Check, Caching, and Network Routers

Bloom Filters are perfect for applications like spell check, where quick membership tests are needed without using much memory. They’re also used in caching systems and network routers for efficient data management.

Inverted Index: The Secret Weapon of Search Engines

What is an Inverted Index?

An Inverted Index is a data structure that maps words to their locations in a document or a set of documents. It’s the backbone of search engines, enabling fast and accurate full-text searches.

How Inverted Indexes Function

Inverted Indexes work by creating a list of words and their associated documents. When you search for a word, the index quickly retrieves the documents that contain it, allowing for fast information retrieval.
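
A tiny sketch of building and querying such an index in Python (the documents are invented for the example):

# Build an inverted index: word -> set of document IDs containing it.
from collections import defaultdict

docs = {
    1: "kafka streams data in real time",
    2: "search engines index data",
    3: "real time analytics with kafka",
}

index = defaultdict(set)
for doc_id, text in docs.items():
    for word in text.split():
        index[word].add(doc_id)

# A lookup is a dictionary access, not a scan of every document.
print(sorted(index["kafka"]))                 # [1, 3]
print(sorted(index["data"] & index["real"]))  # documents with both words: [1]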

Real-World Analogy: An Index in the Back of a Book

Think of an Inverted Index like the index at the back of a book. Instead of reading the whole book to find a topic, you simply look it up in the index and go straight to the relevant pages.

Role in Information Retrieval Systems and Search Engines

Inverted Indexes are critical for search engines like Google, where they enable lightning-fast searches across billions of web pages. Without them, finding information quickly and accurately would be impossible.

Skip List: The Versatile Champion of Fast Searching, Insertion, and Deletion

What is a Skip List?

A Skip List is a data structure that allows for fast search, insertion, and deletion operations by maintaining multiple layers of linked lists. It’s a versatile alternative to balanced trees, offering similar performance with less complexity.

How Skip Lists Improve Performance

Skip Lists use a hierarchy of linked lists to skip over large portions of data, reducing the time it takes to find an element. This makes them faster than traditional linked lists while maintaining simplicity.

Real-World Analogy: A Well-Designed Game Strategy

Imagine playing a game where you can skip certain levels if you have the right strategy. Skip Lists do the same, allowing you to skip over unnecessary data to get to what you need faster.

Uses in In-Memory Databases and Priority Queues

Skip Lists are commonly used in in-memory databases and priority queues, where they balance simplicity and efficiency. Their ability to handle dynamic datasets makes them a popular choice for many applications.

Log-Structured Merge (LSM) Tree: The Write-Intensive Workload Warrior

What is an LSM Tree?

A Log-Structured Merge (LSM) Tree is a data structure designed for write-heavy workloads. It optimises data storage by writing sequentially to disk and periodically merging data to maintain efficiency.

Structure and Benefits of LSM Trees

LSM Trees store data in levels, with newer data at the top. As data accumulates, it’s periodically merged and compacted, ensuring that reads remain fast even as the dataset grows.

Real-World Analogy: Optimising a High-Traffic Intersection

Think of an LSM Tree like a high-traffic intersection that’s optimised to handle heavy loads efficiently. By managing the flow of data carefully, it ensures that performance remains high, even under pressure.

Applications in Key-Value Stores and Distributed Databases

LSM Trees are ideal for key-value stores and distributed databases where write operations dominate. Their ability to handle large volumes of writes without sacrificing read performance makes them essential for modern data storage systems.

SSTable (Sorted String Table): The Persistent Storage Superhero

What is an SSTable?

An SSTable is a persistent, immutable data structure used for storing large datasets. It’s sorted and optimized for quick reads and writes, making it a key component in distributed systems like Apache Cassandra.

How SSTables Enhance Data Storage

SSTables store data in a sorted order, which allows for fast sequential reads and efficient use of storage space. They are immutable, meaning once data is written, it cannot be changed, ensuring consistency and reliability.
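
A heavily simplified sketch of the write path behind LSM Trees and SSTables: recent writes land in an in-memory table, which is periodically frozen into an immutable sorted run, and reads check memory first and then the runs from newest to oldest. All names and sizes are invented for the illustration:

# Simplified LSM-style store: a mutable memtable plus immutable sorted runs.
class TinyLSM:
    def __init__(self, memtable_limit=4):
        self.memtable = {}        # recent writes, mutable
        self.sstables = []        # immutable sorted runs, newest last
        self.memtable_limit = memtable_limit

    def put(self, key, value):
        self.memtable[key] = value
        if len(self.memtable) >= self.memtable_limit:
            # Flush: freeze the memtable as a sorted, immutable run.
            self.sstables.append(sorted(self.memtable.items()))
            self.memtable = {}

    def get(self, key):
        if key in self.memtable:               # newest data wins
            return self.memtable[key]
        for run in reversed(self.sstables):    # newest run first
            for k, v in run:                   # a real SSTable would binary-search
                if k == key:
                    return v
        return None

store = TinyLSM()
for i in range(6):
    store.put(f"k{i}", i)
print(store.get("k1"), store.get("k5"))  # 1 5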

Real-World Analogy: Organising Books by Title in a Library

Imagine a library where all the books are sorted by title. When you need a book, you can quickly find it because everything is in order. SSTables work similarly, ensuring that data is always easy to find and retrieve.

Uses in Distributed Environments Like Apache Cassandra

SSTables are crucial for distributed environments where data consistency and speed are paramount. In systems like Apache Cassandra, they provide the backbone for scalable and reliable data storage.

Entity Framework

+
Automatic and Manual Migrations in EF?
+
Automatic Migrations update database schema automatically; Manual Migrations require explicit creation of migration files.
Code First Migrations?
+
Migrations help incrementally update the database schema as the model changes, preserving existing data.
Code-First approach in EF?
+
Code-First approach allows creating domain classes first, and EF generates the database schema based on the classes.
Database-First approach in EF?
+
Database-First approach generates the EF model from an existing database.
DbContext and ObjectContext?
+
DbContext is a lightweight EF context for querying and saving data; ObjectContext is a more feature-rich context used in older EF versions.
DbContext.Database.ExecuteSqlRaw()?
+
ExecuteSqlRaw() executes raw SQL commands against the database directly.
DbContext?
+
DbContext is the primary EF class for querying, saving data, and managing entity objects.
DbSet in EF?
+
DbSet represents a table or collection of entities in the DbContext, providing LINQ query capabilities and allowing querying and saving operations.
Difference between Add(), Attach(), and Update() in EF?
+
Add() marks for insert; Attach() attaches existing entity; Update() marks entity as modified for update.
Difference between AsNoTracking() and default tracking queries?
+
AsNoTracking() improves performance for read-only queries by not tracking; default tracking tracks entities for changes.
Difference between Code-First Data Annotations and Fluent API?
+
Data Annotations decorate classes and properties with attributes; Fluent API provides configuration using method calls in DbContext OnModelCreating.
Difference between Database.EnsureCreated() and Database.Migrate() in EF Core?
+
EnsureCreated() creates database if it does not exist, bypassing migrations; Migrate() applies pending migrations.
Difference between DbContext and ObjectContext?
+
DbContext is simpler, lightweight, and recommended; ObjectContext is more verbose and low-level.
Difference between DbContext.Entry() and DbSet.Update()?
+
Entry() allows setting entity state explicitly; Update() marks entity as Modified for saving.
Difference between DbContext.SaveChanges() and DbContext.SaveChangesAsync()?
+
SaveChanges() is synchronous; SaveChangesAsync() is asynchronous and non-blocking.
Difference between DbSet.Attach() and DbSet.Add()?
+
Attach() attaches an existing entity to the context without marking as Added; Add() marks the entity as Added for insertion.
Difference between DbSet.Remove() and DbContext.Entry().State = EntityState.Deleted?
+
Remove() marks entity for deletion; setting State to Deleted explicitly marks entity for deletion.
Difference between eager loading and projection in EF Core?
+
Eager loading retrieves full related entities; projection selects only required fields.
Difference between eager loading with Include() and projection with Select()?
+
Include() loads full entity and related data; Select() projects only specific fields, improving performance.
Difference between EF Core and EF6 performance-wise?
+
EF Core is generally faster and lightweight; EF6 has more features but heavier and Windows-only.
Difference between EF migrations and database seeding?
+
Migrations modify database schema; Seeding populates database with initial or test data.
Difference between EF6 and EF Core?
+
EF Core is cross-platform, lightweight, and modern; EF6 is mature, Windows-only, and full-featured.
Difference between Entity and Complex Type in EF?
+
Entity has a key and can be tracked; Complex Type has no key and is used as a property inside an entity.
Difference between Find() and FirstOrDefault() in EF?
+
Find() searches by primary key and may return cached entity; FirstOrDefault() executes query on database and returns first match or default.
Difference between foreign key and navigation property in EF?
+
Foreign key holds the key value; navigation property allows navigation to related entity.
Difference between FromSqlRaw() and FromSqlInterpolated()?
+
FromSqlRaw() executes raw SQL; FromSqlInterpolated() allows parameterized queries to prevent SQL injection.
Difference between Include() and ThenInclude() in EF Core?
+
Include() loads related entity; ThenInclude() loads nested related entities after Include().
Difference between IQueryable and IEnumerable in EF?
+
IQueryable executes queries on the database and supports deferred execution; IEnumerable executes in memory after fetching data.
Difference between lazy loading and eager loading in EF?
+
Lazy loads on demand; eager loads related data immediately with Include().
Difference between lazy loading and explicit loading?
+
Lazy loading loads when navigation property accessed; explicit loading requires manual Load() call.
Difference between lazy loading proxies and manual lazy loading?
+
Lazy loading proxies automatically intercept navigation properties; manual lazy loading requires explicit Load() calls.
Difference between Lazy Loading, Eager Loading, and Explicit Loading?
+
Lazy Loading loads on demand; Eager Loading loads with the initial query; Explicit Loading loads manually when needed.
Difference between LINQ to Entities and LINQ to Objects in EF?
+
LINQ to Entities translates queries to SQL for database; LINQ to Objects operates on in-memory objects.
Difference between POCO and EntityObject?
+
POCO (Plain Old CLR Object) is a simple class without EF dependency; EntityObject derives from EF base classes and is tightly coupled with EF.
Difference between RowVersion/ConcurrencyToken and Timestamp in EF?
+
Both are used for optimistic concurrency; Timestamp is SQL Server-specific byte array, ConcurrencyToken can be any property marked for concurrency.
Difference between SingleOrDefault() and FirstOrDefault() in EF?
+
SingleOrDefault() expects exactly one match and throws if multiple; FirstOrDefault() returns the first match without error if multiple.
Difference between TPH and TPT inheritance in EF?
+
TPH uses one table for all types; TPT uses separate table for each type.
Difference between TPH, TPT, and Table-per-Concrete Class inheritance?
+
TPH stores all types in one table; TPT stores each type in separate table; Table-per-Concrete stores each concrete class in its own table.
Difference between EF and ADO.NET?
+
EF abstracts SQL into objects (ORM), while ADO.NET requires manual SQL queries and dataset manipulation.
Different approaches of Entity Framework?
+
Database-First, Model-First, and Code-First approaches.
Eager Loading in EF?
+
Eager Loading retrieves related entities immediately along with the main entity using the Include() method, reducing multiple database calls.
EF Core async operations?
+
EF Core supports async versions of query and save methods, improving scalability and non-blocking I/O.
EF Core batch operations?
+
Batch operations execute multiple insert, update, or delete commands in a single database round-trip.
EF Core cascade delete?
+
Cascade delete automatically deletes dependent entities when principal entity is deleted.
EF Core Change Tracker?
+
Change Tracker keeps track of entity changes in the context for insert, update, and delete operations.
EF Core concurrency handling?
+
EF Core uses concurrency tokens or timestamps to detect conflicting updates and prevent data loss.
EF Core connection pooling?
+
Connection pooling reuses database connections for performance optimization.
EF Core database seeding?
+
Seeding populates database with initial or test data during migrations or startup.
EF Core DbContext pooling?
+
DbContext pooling reuses context instances to reduce memory allocation and improve performance in high-load applications.
EF Core global query filter?
+
A global query filter automatically applies conditions to all queries for a given entity type, e.g. soft-delete filters.
EF Core migrations rollback?
+
Migrations rollback allows reverting database schema to previous state using Remove-Migration or Update-Database commands.
EF Core owned entity?
+
Owned entity is a dependent entity type whose lifecycle is tied to the owner and shares the same table.
EF Core owned types vs complex types?
+
Owned types are dependent entities with lifecycle tied to owner; complex types in EF6 were similar but without EF Core features.
EF Core query types (keyless entity)?
+
Keyless entities represent database views or tables without primary keys, used for read-only queries.
EF Core shadow key?
+
A key property not defined in CLR class but maintained in EF model for relationships.
EF Core shadow property?
+
A shadow property is not defined in the CLR class but is maintained in the EF Core model and database, typically for relational mapping or foreign keys.
EF Core table splitting?
+
Table splitting stores multiple entity types in the same database table.
EF Core tracking vs no-tracking queries?
+
Tracking queries track changes for update; no-tracking queries improve read performance without change tracking.
EF Core value conversion?
+
Value conversion transforms property values between CLR type and database type during read/write operations.
Entity Framework?
+
Entity Framework (EF) is an Object-Relational Mapping (ORM) framework for .NET that maps database tables to classes, letting developers work with data as objects without writing SQL.
Execute raw SQL in EF?
+
Use context.Database.SqlQuery() for queries or context.Database.ExecuteSqlCommand() for commands.
Explicit Loading in EF?
+
Explicit Loading loads related data manually using Load() method on navigation properties.
Foreign key property in EF?
+
Foreign key property stores the key of a related entity to define relationships.
Keyless entity type in EF Core?
+
Keyless entity type does not have a primary key and is used for read-only queries like views.
Lazy Loading in EF?
+
Lazy Loading delays loading of related entities until they are accessed for the first time, which can improve performance for large datasets.
Migration in EF Code-First?
+
Migration is a feature that allows updating the database schema incrementally when the model changes.
Model-First approach in EF?
+
Model-First approach allows creating the EF model visually, and EF generates the database schema from it.
Navigation properties?
+
Properties in entities used to represent relationships between tables (one-to-one, one-to-many, many-to-many).
Navigation property in EF?
+
Navigation property represents a relationship between two entities, allowing navigation from one entity to another.
No-Tracking query in EF?
+
No-Tracking query does not track changes to entities and improves performance for read-only operations using AsNoTracking().
Optimistic concurrency in EF?
+
Optimistic concurrency allows multiple users to work on data and checks for conflicts when saving changes.
Owned entity type in EF Core?
+
Owned entity type shares the same table with owner entity and cannot exist independently.
Shadow property in EF?
+
A property not defined in the CLR class but maintained in the EF model and database, used for change tracking or foreign keys.
Tracking query in EF?
+
A tracking query tracks changes to entities retrieved from the database so that changes can be persisted back.
Types of EF approaches?
+
Database First: generates classes from an existing database; Model First: create the model, then generate the database; Code First: classes define the schema, and the database is generated automatically.

Everything About DevOps

+
Q1) what is DevOps
+
By the name DevOps, it's very clear that it's a collaboration of Development as well as Operations. But one should know that DevOps is not a tool, a piece of software, or a framework; DevOps is a combination of tools which help automate the whole infrastructure. DevOps is basically an implementation of the Agile methodology on the Development side as well as the Operations side.
Q2) why do we need DevOps
+
To fulfil the need of delivering more and faster and better applications to meet the ever-growing demands of users, we need DevOps. DevOps helps deployments happen really fast compared to any other traditional tools.
Q3) Mention the key aspects or principles behind DevOps
+
The key aspects or principles behind DevOps are: Infrastructure as Code, Continuous Integration, Continuous Deployment, Automation, Continuous Monitoring, and Security.
Q4) List out some of the popular tools for DevOps
+
Git, Jenkins, Ansible, Puppet, Nagios, Docker, ELK (Elasticsearch, Logstash, Kibana)
Q5) What is a version control system
+
A Version Control System (VCS) is software that helps software developers work together and maintain a complete history of their work. Some features of a VCS: it allows developers to work simultaneously; it does not allow overwriting of each other's changes; it maintains the history of every version. There are two types of Version Control Systems: Centralized Version Control Systems, e.g. SVN; and Distributed/Decentralized Version Control Systems, e.g. Git (often hosted on services such as GitHub or Bitbucket).
Q6) What is Git and explain the difference between Git and SVN
+
Git is a source code management (SCM) tool which handles small as well as large projects with efficiency. It is basically used to store our repositories on a remote server such as GitHub. Git vs SVN:
• Git is a decentralized version control tool; SVN is a centralized version control tool.
• Git keeps the local repository and the full history of the whole project on every developer's hard drive, so if there is a server outage you can easily recover from a teammate's local Git repo; SVN relies only on the central server to store all versions of the project files.
• Push and pull operations are fast in Git; they are slower in SVN.
• Git belongs to the 3rd generation of version control tools; SVN belongs to the 2nd generation.
• In Git, client nodes can share entire repositories from their local systems; in SVN, version history is stored only in the server-side repository.
• In Git, commits can be done offline; in SVN, commits can be done only online.
• In SVN, work is shared automatically on commit (it goes straight to the central server); in Git, commits stay local until they are pushed.
Q7) what language is used in Git
+
Git is written in the C language, and since it is written in C it is very fast and reduces the overhead of runtimes.
Q8) what is SubGit
+
SubGit is a tool for migrating SVN to Git. It creates a writable Git mirror of a local or remote Subversion repository, letting you use both Subversion and Git for as long as you like.
Q9) how can you clone a Git repository via Jenkins
+
First, we must enter the e-mail and user name for your Jenkins system, then switch into your job directory and execute the “git config” command.
Q10)What are the Advantages of Ansible
+
Agentless (it doesn't require any extra packages/daemons to be installed); very low overhead; good performance; idempotent; very easy to learn; declarative, not procedural.
Q11) what’s the use of Ansible
+
Ansible is mainly used in IT infrastructure to manage or deploy applications to remote nodes. Let's say we want to deploy one application to hundreds of nodes by just executing one command; that is where Ansible comes into the picture, but you should have some knowledge of Ansible scripts to understand or execute them.
Q12) What's the difference between Ansible Playbook and Roles
+
Roles are reusable subsets of a play: a set of tasks for accomplishing a certain role (examples: common, webservers). Playbooks contain plays and map hosts to roles (examples: site.yml, fooservers.yml, webservers.yml).
Q13) How do I see a list of all the Ansible variables
+
Ansible by default gathers “facts” about the machines, and these facts can be accessed in playbooks and in templates. To see a list of all the facts available about a machine, you can run the “setup” module as an ad-hoc action: ansible -m setup hostname. This will print out a dictionary of all the facts available for that particular host.
Q14) what is Docker
+
Docker is a containerization technology that packages your application and all its dependencies together in the form of containers to ensure that your application works seamlessly in any environment.
Q15) what is Docker image
+
A Docker image is the source of a Docker container. In other words, Docker images are used to create containers.
Q16) what is Docker Container
+
A Docker container is the running instance of a Docker image.
Q17) Can we consider DevOps as Agile methodology
+
Of course we can! The only difference between the Agile methodology and DevOps is that the Agile methodology is implemented only for the development section, while DevOps implements agility on both the development and the operations sections.
Q18) what are the advantages of using Git
+
Data redundancy and replication; high availability; only one .git directory per repository; superior disk utilization and network performance; collaboration friendly; Git can be used for any sort of project.
Q19) What is a kernel
+
A kernel is the lowest level of easily replaceable software that interfaces with the hardware in your computer.
Q20) What is the difference between grep -i and grep -v
+
grep -i ignores case when matching, while grep -v inverts the match and prints only the lines that do not match. Example: ls | grep -i docker prints Dockerfile and docker.tar.gz, while ls | grep -v docker prints Desktop, Dockerfile, Documents, and Downloads; you can't see anything with the name docker.tar.gz.
Q21) How can you allocate a particular amount of space to a file
+
This feature is generally used to give swap space to the server. Let's say I have to create a swap space of 1 GB; then: dd if=/dev/zero of=/swapfile1 bs=1G count=1
Q22) What is the concept of sudo in Linux
+
Sudo (superuser do) is a utility for UNIX- and Linux-based systems that provides an efficient way to give specific users permission to use specific system commands at the root (most powerful) level of the system.
Q23) what is a Jenkins Pipeline
+
Jenkins Pipeline (or simply “Pipeline”) is a suite of plugins which supports implementing and integrating continuous delivery pipelines into Jenkins.
Q24) How to stop and restart the Docker container
+
To stop the container: docker stop <container ID>. To restart the Docker container: docker restart <container ID>.
Q25) What platforms does Docker run on
+
Docker runs on Linux and cloud platforms: Ubuntu 12.04 LTS+, Fedora 20+, RHEL 6.5+, CentOS 6+, Gentoo, Arch Linux, openSUSE 12.3+, CRUX 3.0+; Cloud: Amazon EC2, Google Compute Engine, Microsoft Azure, Rackspace. Note that Docker does not run on Windows or Mac for production as there is no support; yes, you can use it for testing purposes, even on Windows.
Q26) What are the tools used for Docker networking
+
For Docker networking we generally use Kubernetes and Docker Swarm.
Q27) What is Docker Compose
+
Let's say you want to run multiple Docker containers; in that case you have to create a docker-compose file and type the command docker-compose up. It will run all the containers mentioned in the docker-compose file.
Q28) What is Scrum
+
Scrum is basically used to divide your complex software and product development tasks into smaller chunks, using iterations and incremental practices. Each iteration is of two weeks. Scrum consists of three roles: Product Owner, Scrum Master, and Team.
Q29) What does the commit object contain
+
A commit object contains the following components: a set of files, representing the state of the project at a given point in time; references to parent commit objects; and an SHA-1 name, a 40-character string that uniquely identifies the commit object (also called the hash).
Q30) Explain the difference between git pull and git fetch
+
The git pull command basically pulls any new changes or commits from a branch in your central repository and updates your target branch in your local repository. Git fetch is also used for the same purpose, but it's slightly different from git pull. When you trigger a git fetch, it pulls all new commits from the desired branch and stores them in a new branch in your local repository. If we want to reflect these changes in the target branch, git fetch must be followed by a git merge; our target branch will only be updated after merging the target branch and the fetched branch. Just to make it easy for us, remember the equation below: git pull = git fetch + git merge
Q31) How do we know in Git if a branch has already been merged into master
+
git branch --merged lists the branches that have been merged into the current branch; git branch --no-merged lists the branches that have not been merged.
Q32) What is the ‘Staging Area’ or ‘Index’ in Git
+
Before committing a file, it must be formatted and reviewed in an intermediate area known as the ‘Staging Area’ or ‘Index’. Files are added to it with: git add
Q33) What is Git Stash
+
Let’s say you’ve been working on part of your project, things are in a messy state, and you want to switch branches for some time to work on something else. The problem is, you don’t want to do a commit of your half-done work just so you can get back to this point later. The answer to this issue is git stash. Git stashing takes your working directory, that is, your modified tracked files and staged changes, and saves it on a stack of unfinished changes that you can reapply at any time.
Q34) What is Git stash drop
+
The git ‘stash drop’ command is basically used to remove a stashed item. By default it removes the last added stash item, and it can also remove a specific item if you include it as an argument. If you want to remove a particular stash item from the list of stashed items, first run git stash list, which will display the list of stashed items as follows: stash@{0}: WIP on master: 049d080 added the index file; stash@{1}: WIP on master: c265351 Revert “added files”; stash@{2}: WIP on master: 13d80a5 added number to log. Then drop the item you want, e.g. git stash drop stash@{1}.
Q35) What is the function of ‘git config’
+
Git uses our username to associate commits with an identity. The git config command can be used to change our Git configuration, including the username. Suppose you want to give a username and email ID to associate commits with an identity so that you can know who has made a commit. For that I will use: git config --global user.name “Your Name” (adds your username) and git config --global user.email “Your E-mail Address” (adds your email ID).
Q36) How can you create a repository in Git
+
To create a repository, you must create a directory for the project if it does not exist, then run the command “git init”. By running this command, a .git directory will be created inside the project directory.
Q37) Describe the branching strategies you have used
+
Generally, they ask this question to understand your branching knowledge. Feature branching: this model keeps all the changes for a feature inside a branch; when the feature branch is fully tested and validated by automated tests, the branch is then merged into master. Task branching: in this model each task is implemented on its own branch with the task key included in the branch name; it is quite easy to see which code implements which task, just look for the task key in the branch name. Release branching: once the develop branch has acquired enough features for a release, we can clone that branch to form a release branch. Creating this release branch starts the next release cycle, so no new features can be added after this point; only bug fixes, documentation generation, and other release-oriented tasks should go in this branch. Once it's ready to ship, the release gets merged into master and then tagged with a version number. In addition, it should be merged back into the develop branch, which may have progressed since the release was initiated.
Q38) What is Jenkins
+
Jenkins is an open-source continuous integration tool written in the Java language. It keeps track of the version control system and initiates and monitors builds if any changes occur. It monitors the whole process and provides reports and notifications to alert the concerned team.
Q39) What is the difference between Maven, Ant and Jenkins
+
Maven and Ant are build technologies, whereas Jenkins is a continuous integration (CI/CD) tool.
Q40) Explain what is continuous integration
+
When multiple developers or teams are working on different segments of the same web application, we need to perform integration testing by integrating all the modules. To do that, an automated process for each piece of code is performed on a daily basis so that all your code gets tested. This whole process is termed continuous integration.
Q41) What is the relation between Hudson and Jenkins
+
Hudson was the earlier name of the current Jenkins. After some issues were faced, the project name was changed from Hudson to Jenkins.
Q42) What are the advantages of Jenkins
+
Advantages of using Jenkins: bug tracking is easy at an early stage in the development environment; it provides a very large number of plugins; iterative improvement to the code, which is basically divided into small sprints; build failures are caught at the integration stage; for each code commit, an automatic build report notification gets generated; to notify developers about build success or failure, it can be integrated with an LDAP mail server; it achieves continuous integration in an agile development and test-driven development environment; with simple steps, a Maven release project can also be automated.
Q43) Which SCM tools does Jenkins support
+
Source code management tools supported by Jenkins: AccuRev, CVS, Subversion, Git, Mercurial, Perforce, ClearCase, RTC.
Q44) What is Ansible
+
Ansible is a software configuration management tool used to deploy applications to remote nodes over SSH without any downtime. It is also used for the management and configuration of software applications. Ansible is developed in the Python language.
Q45) How can you set up Jenkins jobs
+
Steps to set up a Jenkins job: select New Item from the menu; enter a name for the job (it can be anything) and select a free-style job; click OK to create the new job in the Jenkins dashboard; the next page enables you to configure your job, and it's done.
Q46) What are your daily activities in your current role
+
Working on JIRA tickets; builds and deployments; resolving issues when builds and deployments fail by coordinating and collaborating with the dev team; infrastructure maintenance; monitoring the health of applications.
Q47) What are the challenges you faced in recent times
+
I needed to implement trending technologies like Docker to automate the configuration management activities in my project by showing a POC.
Q48) What are the build and deployment failures you got and how did you resolve them
+
Most of the time I used to get an out-of-memory issue. I first fixed the issue by restarting the server, which is not best practice. I made the permanent fix by increasing the PermGen space and the heap space.
Q49) I want a file that consists of the last 10 lines of some other file
+
tail -10 filename > newfile (redirect to a different file; redirecting to the same file would truncate it before it is read)
Q50) How to check the exit status of a command
+
echo $?
Q51) I want to get the information from a file which consists of the word "GangBoard"
+
grep "GangBoard" filename
Q52) I want to search for files with the name "GangBoard"
+
find / -type f -name "*GangBoard*"
Q53) Write a shell script to print only prime numbers
+
prime.sh:

#!/bin/bash
# Print the prime numbers between 2 and 300.
i=2
j=300
while [ $i -le $j ]
do
  flag=0
  temp=2
  while [ $temp -lt $i ]
  do
    if [ `expr $i % $temp` -eq 0 ]
    then
      flag=1
      break
    fi
    temp=`expr $temp + 1`
  done
  if [ $flag -eq 0 ]
  then
    echo $i
  fi
  i=`expr $i + 1`
done
Q54) How to pass parameters to a script and how can I get those parameters
+
Run the script as scriptname.sh parameter1 parameter2; inside the script I will use $* (or $1, $2, ...) to get the parameters.
Q55) What are the default file permissions for a file and how can I modify them
+
Default file permissions are rw-r--r-- (644), which comes from the default umask of 022. If I want to change the default file permissions, I need to use the umask command, e.g. umask 027.
Q56) How will you do the releases
+
There are some steps to follow: create a checklist; create a release branch; bump the version; merge the release branch to master and tag it; use a pull request to merge the release; deploy master to the Prod environment; merge back into develop and delete the release branch; generate the changelog; communicate with stakeholders; groom the issue tracker.
Q57) How do you automate the whole build and release process
+
Check out a set of source code files. Compile the code and report on progress along the way. Run automated unit tests against successful compiles. Create an installer. Publish the installer to a download site, and notify teams that the installer is available. Run the installer to create an installed executable. Run automated tests against the executable. Report the results of the tests. Launch a subordinate project to update standard libraries. Promote executables and other files to QA for further testing. Deploy finished releases to production environments, such as web servers or CD manufacturing. The above process will be done by Jenkins by creating the jobs.
Q58) I have 50 jobs in the Jenkins dashboard; I want to build all the jobs at a time
+
In Jenkins there is a trigger called “Build after other projects are built”. We can provide the job names there, and if one parent job runs, it will automatically run all the other jobs. Or we can use Pipeline jobs.
Q59) How can I integrate all the tools with Jenkins
+
I have to navigate to Manage Jenkins and then Global Tool Configuration; there you have to provide all the details, such as the Git URL, Java version, Maven version, path, etc.
Q60) How to install Jenkins via Docker
+
The steps are: open up a terminal window, then download the jenkinsci/blueocean image and run it as a container in Docker using the following docker run command (https://docs.docker.com/engine/reference/commandline/run/):
docker run -u root --rm -d -p 8080:8080 -p 50000:50000 -v jenkins-data:/var/jenkins_home -v /var/run/docker.sock:/var/run/docker.sock jenkinsci/blueocean
Proceed to the post-installation setup wizard (https://jenkins.io/doc/book/installing/#setup-wizard). To access the Jenkins/Blue Ocean Docker container: docker exec -it jenkins-blueocean bash (assuming the container was started with --name jenkins-blueocean). To access the Jenkins console log: docker logs jenkins-blueocean. To access the Jenkins home directory, use docker exec -it jenkins-blueocean bash and look in /var/jenkins_home.
Q61) Did you ever participate in Prod deployments? If yes, what is the procedure
+
Yes, I have participated. In my point of view, we need to follow these steps. Preparation & planning: what kind of system/technology is supposed to run on what kind of machine; the specifications regarding the clustering of systems; how all these stand-alone boxes are going to talk to each other in a foolproof manner. The production setup should be documented to bits; it needs to be neat, foolproof, and understandable. It should have all system configurations, IP addresses, system specifications, and installation instructions, and it needs to be updated as and when any change is made to the production environment of the system.
Q62) My application is not coming up for some reason. How can you bring it up
+
We need to check the following: the network connection; if the web server is not receiving users' requests, check the logs and check the process IDs to see whether the services are running or not; if the application server is not receiving users' requests, check the application server logs and processes; and whether a network-level ‘connection reset’ is happening somewhere.
Q63) Did you automate anything in your project Please explain
+
Yes, I have automated a couple of things, such as:
Password expiry automation
Deleting the older log files
Code quality threshold violations etc.
Q64) What is IaC How will you achieve it
+
Infrastructure as Code (IaC) is the management of infrastructure (networks, virtual machines, load balancers, and connection topology) in a descriptive model, using the same versioning the DevOps team uses for source code. This is achieved by using tools such as Chef, Puppet, Ansible, etc.
Q65) What is multifactor authentication What is the use of it
+
Multifactor authentication (MFA) is a security system that requires more than one method of authentication, from independent categories of credentials, to verify the user's identity for a login or other transaction. Its uses:
Security for every enterprise user: end and privileged users, internal and external.
Protection across enterprise resources: cloud and on-prem apps, VPNs, endpoints, servers, privilege elevation, and more.
Reduced cost and complexity with an integrated identity platform.
Q66) I want to copy the artifacts from one location to another location in the cloud. How
+
Create two S3 buckets, one to use as the source and the other to use as the destination, and then create the policies.
Q67) How can I modify the commit message in git
+
I have to use the following command and enter the required message:
git commit --amend
Q68) How can you avoid the waiting time for triggered jobs in Jenkins
+
First I will check the slave nodes' capacity; if they are fully loaded, then I will add a slave node by the following process:
Go to the Jenkins dashboard -> Manage Jenkins -> Manage Nodes.
Create the new node by giving all the required fields and launch the slave machine as you want.
Q69) What are the Pros and Cons of Ansible
+
Pros:
Open source
Agentless
Improved efficiency, reduced cost
Less maintenance
Easy-to-understand YAML files
Cons:
Underdeveloped GUI with limited features
Increased focus on orchestration over configuration management
SSH communication slows down in scaled environments
Q70) How do you handle merge conflicts in git
+
Follow these steps:
Create a pull request.
Modify the file according to the requirement, sitting with the developers.
Commit the corrected file to the branch.
Merge the current branch with the master branch. (A command-line sketch follows below.)
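A minimal command-line sketch of that flow; the file name app.conf is illustrative:

git merge master                  # the merge stops and reports the conflicted file
# edit app.conf and remove the <<<<<<< / ======= / >>>>>>> conflict markers
git add app.conf                  # mark the conflict as resolved
git commit                        # complete the merge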
Q71) I want to delete log files older than 10 days. How can I
+
There is a command in Unix to achieve this task:
find . -mtime +10 -name "*.log" -exec rm -f {} \; 2>/dev/null
What is the difference among Chef, Puppet, and Ansible
+
Interoperability: Chef and Puppet work only on Linux/Unix; Ansible supports Windows clients, but the server should be on Linux/Unix.
Configuration language: Chef uses Ruby; Puppet uses the Puppet DSL; Ansible uses YAML (and is written in Python).
Availability: Chef has a primary server and a backup server; Puppet has a multi-master architecture; Ansible has a single active node.
Q72) How do you get the inventory variables defined for the host
+
We need to use the following command:
ansible -m debug -a "var=hostvars['hostname']" localhost
(where 'hostname' is the inventory host you are interested in, e.g. 10.92.62.215)
Q73) How will you take a backup of Jenkins
+
Copy the JENKINS_HOME directory and the "jobs" directory to replicate it on another server.
Q74) How to deploy a Docker container to AWS
+
Amazon provides a service called Amazon Elastic Container Service (ECS). By creating and configuring task definitions and services with it, we can launch the applications.
Q75) I want to change the default port number of Apache Tomcat. How
+
Go to the Tomcat folder and navigate to the conf folder; there you will find a server.xml file. You can change the port attribute of the Connector tag as you want.
Q76) In how many ways can you install Jenkins
+
We can install Jenkins in 3 ways:
By downloading the Jenkins archive file
By running it as a service: java -jar jenkins.war
By deploying jenkins.war to the webapps folder in Tomcat
Q77) How will you run a Jenkins job from the command line
+
We have the Jenkins CLI; from there we need to use the curl command:
curl -X POST -u YOUR_USER:YOUR_USER_PASSWORD http://YOUR_JENKINS_URL/job/YOUR_JOB/build
Q78) How will you do tagging in git
+
We have the following command to create tags in git:
git tag v0.1
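For a release it is common to use an annotated tag and push it to the remote; a small sketch, with an illustrative tag name and message:

git tag -a v0.1 -m "release 0.1"
git push origin v0.1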
Q79) How can you connect a container to a network when it starts
+
We need to use the following command:
docker run -itd --network=multi-host-network busybox
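For a container that is already running, the standard docker network connect command attaches it to the network without a restart; the container name is illustrative:

docker network connect multi-host-network my-container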
Q80) How will you do code commit and code deploy in the cloud
+
Create a deployment environment. Get a copy of the sample code. Create your pipeline. Activate your pipeline. Commit a change and update the app.
Q81) How to access variable names in Ansible
+
Using the hostvars method we can access and add the variables like below:
{{ hostvars[inventory_hostname]['ansible_' + which_interface]['ipv4']['address'] }}
Q82) What is Infrastructure as Code
+
The configuration of any servers, toolchain, or application stack required for an organization can be expressed as a descriptive level of code, and that code can be used for provisioning and managing infrastructure components like virtual machines, software, and network elements. It differs from scripts written in any language, which are a series of static steps coded by hand; here, version control can be used in order to track environment changes. Example tools are Ansible and Terraform.
Q83) What are the areas where version control can be introduced to get a proficient DevOps practice
+
A clearly fundamental area of version control is source code management, where every developer's code should be pushed to a common repository for maintaining build and release in CI/CD pipelines.
Another area is version control for administrators, when they use Infrastructure as Code (IaC) tools and practices for maintaining the environment configuration.
Another area of version control is artifact management, using repositories like Nexus and DockerHub.
Q84) Why do open-source tools support DevOps
+
Open-source tools are predominantly used by any organization that is adapting to (or adopting) DevOps pipelines, because DevOps came with a focus on automation in various aspects of the organization's build, release, and change management, and also in system administration. Developing or using a single tool for all of this is impossible, everything is basically in a trial-and-error phase of development, and agile's short iterations cut down the benefit of building a single tool. So the open-source tools available on the market serve practically every purpose and also give the organization the option to evaluate each tool based on its need.
Q85) What is the distinction between Ansible and Chef (or) Puppet
+
Ansible is an agentless configuration management tool, whereas Puppet or Chef needs an agent to be running on the managed node, and Chef or Puppet relies on a pull model, where your cookbook or manifest (for Chef and Puppet respectively) is pulled from the master by the agent. Ansible uses SSH to communicate and gives data-driven instructions to the nodes to be managed, more like RPC execution; Ansible uses YAML scripting, whereas Puppet (or) Chef is built with Ruby and uses its own DSL.
Q86) What is Jinja2 templating in Ansible playbooks and what is its use
+
Jinja2 templating is the Python standard for templating; think of it like a sed editor for Ansible. It is used where there is a need for a dynamic change to any config file for any application, like mapping a MySQL application to the IP address of the machine where it is running: it can't be static, it needs modifying dynamically at runtime. The variables inside the braces are replaced by Ansible while running, using the template module.
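A minimal sketch of the idea, assuming a hypothetical template file my.cnf.j2 and a task that renders it; the file paths are illustrative:

# templates/my.cnf.j2 -- the Jinja2 placeholder is filled in at run time
bind-address = {{ ansible_default_ipv4.address }}

# task using the template module to render the file onto the managed host
- name: render MySQL config with the host's IP address
  template:
    src: my.cnf.j2
    dest: /etc/mysql/my.cnf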
Q87) What is the need for organizing playbooks as roles Is it necessary
+
Organizing playbooks as roles gives greater clarity and reusability to any plays. Consider a task where a MySQL installation should be done after the removal of Oracle DB, and another requirement where MySQL needs to be installed after a Java installation. In both cases we have to install MySQL, but without roles we would need to write playbooks separately for both use cases; using roles, once the MySQL installation role is made, it can be used any number of times by invoking it with logic in site.yml (see the sketch below).
No, it isn't necessary to create roles for every situation, but creating roles is the best practice in Ansible.
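A minimal sketch of the layout, assuming a hypothetical role named mysql and illustrative host group names:

roles/
  mysql/
    tasks/main.yml        # the reusable MySQL installation steps
    templates/            # any config templates the role needs

# site.yml -- the same role reused in two different scenarios
- hosts: migrated_from_oracle
  roles:
    - mysql
- hosts: java_servers
  roles:
    - mysql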
Q88) What is the fundamental disadvantage of Docker containers
+
The lifetime of the data is tied to the container: once a running container is destroyed, you can't retrieve any data inside it; the data inside a container is lost forever. However, persistent storage for data inside containers is possible using volumes mounted to an external source like the host machine or any NFS driver.
Q89) What are the Docker engine and Docker Compose
+
The Docker engine talks to the Docker daemon inside the machine and creates the runtime environment and process for any container. Docker Compose links several containers together to form a stack, used for creating application stacks like LAMP, WAMP, and XAMPP.
Q90) What are the different modes a container can be run in
+
A Docker container can be run in two modes:
Attached: where it runs in the foreground of the system you are on; it gives a terminal inside the container when the -t option is used with it, and every log is redirected to the stdout screen.
Detached: this mode is usually used in production, where the container is detached as a background process and every output inside the container is redirected to log files inside /var/lib/docker/logs/<container-id>/, which can be seen with the docker logs command.
Q91) What will the output of the docker inspect command be
+
docker inspect gives output in JSON format, which contains details like the IP address of the container inside the Docker virtual bridge, volume mount information, and every other piece of information related to the host (or) container, like the underlying file driver used and the log driver used.
docker inspect [OPTIONS] NAME|ID [NAME|ID...]
Options:
--format, -f    Format the output using the given Go template
--size, -s      Display total file sizes if the type is container
--type          Return JSON for a specified type
Q92) What command can be used to check the resource usage of Docker containers
+
The docker stats command can be used to check the resource usage of any Docker container; it gives output analogous to the top command in Linux, and it forms the base for container resource-monitoring tools like cAdvisor, which gets its output from the docker stats command.
docker stats [OPTIONS] [CONTAINER...]
Options:
--all, -a       Show all containers (default shows just running)
--format        Pretty-print stats using a Go template
--no-stream     Disable streaming stats and only pull the first result
--no-trunc      Do not truncate output
Q93) How to execute a task (or) play only on localhost while executing playbooks on different hosts in Ansible
+
In Ansible there is a directive called delegate_to; in this directive's section, give the particular host (or) hosts where your task (or) tasks should be run. For example:
tasks:
  - name: "Elasticsearch Hitting"
    uri: url='<es-host>/_search?q=status:new' headers='{"Content-type":"application/json"}' method=GET return_content=yes
    register: output
    delegate_to: 127.0.0.1
(<es-host> is a placeholder for your Elasticsearch URL.)
Q94) What is the distinction between set_fact and vars in Ansible
+
set_fact sets the value for a variable once and it then stays static, even though the underlying value is quite dynamic, whereas vars keep changing as the value behind the variable keeps changing. For example:
tasks:
  - set_fact:
      fact_time: "Fact: {{ lookup('pipe', 'date') }}"
  - debug: var=fact_time
  - command: sleep 2
  - debug: var=fact_time

- name: lookups in vars vs. lookups in facts
  hosts: localhost
  vars:
    var_time: "Var: {{ lookup('pipe', 'date') }}"
Even though the lookup for the date is used in both cases, where vars are used the value changes on every evaluation within the playbook's lifetime, but the fact always stays the same once the lookup is done.
Q95) What is a lookup in Ansible and what are the lookup plugins supported by Ansible
+
Lookup plugins allow access to data in Ansible from outside sources. These plugins are evaluated on the Ansible control machine and can include reading the filesystem as well as contacting external data stores and services. The format is {{ lookup('<plugin>', '<term>') }}. Some of the lookup plugins supported by Ansible are:
file
pipe
redis
jinja templates
etcd kv store
Q96) How do you delete the Docker images stored on your local machine, and how do you do it for all of the images at once
+
The command docker rmi can be used to delete a Docker image from the local machine, though some images may need to be force-removed because the image may be used by some other container (or) another image. To delete all images, you can use the combination of commands docker rmi $(docker images -q), where docker images gives the Docker image names; to get only the IDs of the Docker images, we use the -q switch with the docker images command.
Q97) What are the folders in a Jenkins installation and their uses
+
JENKINS_HOME: usually /$JENKINS_USER/.jenkins; it is the root folder of any Jenkins installation and contains subfolders, each for a different purpose.
jobs/: contains all the information about every job configured in the Jenkins instance. Inside jobs/ you will have a folder created for each job, and inside those folders you will have build folders according to each build number; each build will have its log files, which we see in the Jenkins web console.
plugins/: where all your plugins are stored.
workspace/: this is present to hold all the workspace files, like your source code pulled from SCM.
Q98) What are the ways to configure the Jenkins system
+
Jenkins can be configured in two ways:
Web: there is an option called Configure System; in that section you can make all configuration changes.
Manually on the filesystem: every change can also be done directly in the Jenkins config.xml file under the Jenkins installation directory. After you make changes on the filesystem, you have to restart Jenkins; you can do it directly from the terminal, (or) you can use Reload Configuration from Disk under the Manage Jenkins menu, or hit the /restart endpoint directly.
Q99) What is the role of the HTTP REST API in DevOps
+
DevOps is totally focused on automating your pipeline and moving changes through its different stages; every CI/CD pipeline has stages like build, test, sanity test, UAT, and deployment to the prod environment. At each stage different tools are used and a different technology stack is involved, and there needs to be a way to integrate the different tools to complete a toolchain. That is where the HTTP API comes in: each tool communicates with the other tools using an API, and a user can also use an SDK to interact with different tools, like Boto for Python to contact the AWS APIs for event-based automation. Nowadays it is not batch processing any more; it is mostly event-driven pipelines.
Q100) What are microservices, and how do they power efficient DevOps practices
+
In traditional architecture, every application is a monolith: it is developed by a group of developers, deployed as a single application on many machines, and exposed to the outside world using load balancers. Microservices means breaking your application into small pieces, where each piece serves a different function needed to complete a single transaction. By breaking the application up, the developers can also be formed into groups, and each piece of the application may follow different guidelines for an efficient development phase; each service uses a REST API (or) message queues to communicate with the other services. So the build and release of a non-robust version may not affect the whole architecture; instead, only some functionality is lost. That gives the assurance of efficient and faster CI/CD pipelines and DevOps practices.
Q101) What are the ways a pipeline can be created in Jenkins
+
There are two ways a pipeline can be created in Jenkins:
Scripted pipelines: more like a programming approach.
Declarative pipelines: a DSL approach specifically for creating Jenkins pipelines.
The pipeline should be created in a Jenkinsfile, and its location can be either in SCM or the local system. Declarative and scripted pipelines are constructed fundamentally differently. The declarative pipeline is a more recent feature of Jenkins Pipeline which provides richer syntactical features over the scripted pipeline syntax and is designed to make writing and reading pipeline code easier (see the sketch below).
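A minimal declarative Jenkinsfile sketch for comparison; the stage contents (Maven commands) are illustrative:

pipeline {
    agent any
    stages {
        stage('Build') {
            steps { sh 'mvn -B package' }
        }
        stage('Test') {
            steps { sh 'mvn test' }
        }
    }
}

The scripted form expresses the same steps inside node { ... } blocks with ordinary Groovy control flow.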
Q102) What are labels in Jenkins and where can they be used
+
As a CI/CD setup should be centralized, every application in the organization can be built by a single CI/CD server, and in the organization there may be various kinds of applications, like Java, C#, .NET, and so on. As with the microservices approach, your technology stack is loosely coupled per project, so you can have a label on each node and select the option "Only build jobs with label expressions matching this node". When a build is scheduled with the label of a node on it, it waits for the next executor in that node to be available, even though there are idle executors in other nodes.
Q103) What is the use of Blue Ocean in Jenkins
+
Blue Ocean rethinks the user experience of Jenkins. Designed from the ground up for Jenkins Pipeline, but still compatible with freestyle jobs, Blue Ocean reduces clutter and increases clarity for every member of the team. It provides a sophisticated UI to identify each stage of the pipeline, better pinpointing of issues, and a very rich pipeline editor for beginners.
Q104) What are callback plugins in Ansible Give a few examples of callback plugins
+
Callback plugins enable adding new behaviors to Ansible when responding to events. By default, callback plugins control most of the output you see when running the command-line programs, but they can also be used to add additional output, integrate with other tools, and marshal the events to a storage backend. So whenever a play is executed and produces some events, those events are printed onto the stdout screen, and a callback plugin can push them into any storage backend for log processing. An example callback plugin is ansible-logstash, where every playbook execution is captured by Logstash in JSON format and can be integrated with any other backend source like Elasticsearch.
Q105) What scripting languages can be used in DevOps
+
As for scripting languages, basic shell scripting is used for build steps in Jenkins pipelines, and Python scripts can be used with other tools like Ansible and Terraform as wrapper scripts for complex decision-making tasks in any automation, as Python is better than shell scripts at deriving complex logic; Ruby scripts can also be used as build steps in Jenkins.
Q106) What is continuous monitoring and why is monitoring critical in DevOps
+
DevOps gives every organization the capability of making the build and release cycle much shorter with the concept of CI/CD, where every change is reflected in production environments quickly, so production needs to be closely monitored to get customer feedback. So the concept of continuous monitoring is used to evaluate each application's performance in real time (at least near real time), where each application is built with a compatible application performance monitoring agent and granular-level metrics are taken out, like JVM stats; even function-level metrics inside the application can be streamed out in real time to the agents, which in turn feed a backend store, and that can be used by monitoring teams in dashboards and alerts to continuously monitor the application.
Q107) Give a few examples of continuous monitoring tools
+
Many continuous monitoring tools are available in the market, used for different kinds of applications and deployment models. Docker containers can be monitored by the cAdvisor agent, which can be used with Elasticsearch to store the metrics, (or) you can use the TICK stack (Telegraf, InfluxDB, Chronograf, Kapacitor) for any system monitoring in NRT (near real time), and you can use Logstash (or) Beats to collect logs from the system, which in turn can use Elasticsearch as a storage backend and Kibana (or) Grafana as the visualizer. System monitoring can also be done with Nagios and Icinga.
Q108) What is docker swarm
+
A group of virtual machines with Docker Engine can be clustered and maintained as a single system, with the resources also being shared by the containers, and the Docker Swarm master schedules the Docker containers onto any of the machines in the cluster according to resource availability.
docker swarm init can be used to initiate a Docker Swarm cluster, and docker swarm join with the master IP, run from a client, joins that node into the swarm cluster.
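A minimal sketch of the two commands; the IP address and the token are illustrative placeholders:

docker swarm init --advertise-addr 192.168.1.10                  # on the manager node
docker swarm join --token <worker-token> 192.168.1.10:2377       # on each worker node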
Q115) Why does almost every tool in DevOps have some DSL (Domain Specific Language)
+
DevOps is a culture developed to address the needs of the agile process, where the development rate is faster, so deployment should match its speed, and that needs the operations team to coordinate and work with the dev team. Everything can be automated using scripts, but scripts alone make for a messy organization of any pipeline: the more use cases, the more scripts have to be written. So a set of use cases broad enough to cover the needs of agile is taken, tools are built around them, and customization happens on top of each tool using a DSL, to automate the DevOps practice and infrastructure management.
Q116) What clouds can be integrated with Jenkins and what are the use cases
+
Jenkins can be integrated with different cloud providers for various use cases like dynamic Jenkins slaves or deploying to cloud environments. Some of the clouds that can be integrated are:
AWS
Azure
Google Cloud
OpenStack
Q117) What are Docker volumes and what kind of volume should be used to achieve persistent storage
+
Docker volumes are filesystem mount points created by the user for a container, and a volume can also be used by multiple containers. There are different kinds of volume mounts available: empty dir, host mounts, AWS-backed EBS volumes, Azure volumes, Google Cloud, (or) even NFS and CIFS filesystems. A volume should be mounted on one of the external stores to achieve persistent storage, because the lifetime of files inside a container lasts only while the container is present; if the container is deleted, the data is lost.
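A minimal sketch of a named volume giving a container persistent storage; the names are illustrative:

docker volume create app-data
docker run -d -v app-data:/var/lib/mysql mysql

Data written to /var/lib/mysql now survives removal of the container, because it lives in the app-data volume.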
Q118) What artifact stores can be integrated with Jenkins
+
Any kind of artifact repository can be integrated with Jenkins, using either shell commands (or) dedicated plugins; some of them are Nexus and JFrog Artifactory.
Q119) What are some of the testing tools that can be integrated with Jenkins Mention their plugins
+
Sonar plugin: can be used to integrate code quality testing of your source code.
Performance plugin: can be used to integrate JMeter performance testing.
JUnit plugin: to publish unit test reports.
Selenium plugin: can be used to integrate with Selenium for automation testing.
Q120) What are the build triggers available in Jenkins
+
Builds can be run manually (or) can be triggered automatically by different sources (a pipeline sketch follows below), like:
Webhooks: API calls from the SCM whenever code is committed into a repository, (or) they can be configured for specific events on specific branches.
Gerrit code review trigger: Gerrit is an open-source code review tool; whenever a code change is approved after review, a build can be triggered.
Trigger build remotely: you can have remote scripts on any machine (or) even AWS Lambda functions (or) make a POST request to trigger builds in Jenkins.
Scheduled jobs: jobs can also be scheduled like cron jobs.
Poll SCM for changes: Jenkins looks for any changes in SCM at the given interval; if there is a change, a build can be triggered.
Upstream and downstream jobs: a build can be triggered by another job that was executed previously.
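A minimal declarative-pipeline sketch of two of these triggers; the cron expressions are illustrative:

pipeline {
    agent any
    triggers {
        pollSCM('H/5 * * * *')    // poll SCM roughly every five minutes
        cron('H 2 * * *')         // also run a scheduled nightly build around 2 AM
    }
    stages {
        stage('Build') {
            steps { echo 'building' }
        }
    }
}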
Q121) How to version control Docker images
+
Docker images can be version controlled using tags: you can assign a tag to any image using the docker tag command. Also, if you push to any Docker Hub registry without tagging, the default tag 'latest' is assigned; even if an image tagged 'latest' is already present, the registry points 'latest' at the most recently pushed image.
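A minimal sketch; the image and repository names are illustrative:

docker tag myapp:latest myrepo/myapp:1.4.2
docker push myrepo/myapp:1.4.2

Pushing an explicitly versioned tag avoids the silent reassignment of 'latest' described above.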
Q122) What is the use of the Timestamper plugin in Jenkins
+
It adds a timestamp to every line of the console output of a build.
Q123) Why should you not execute a build on master
+
You can run a build on the master in Jenkins, but it isn't advisable, because the master already has the responsibility of scheduling builds and collecting build outputs into the JENKINS_HOME directory. If we run a build on the Jenkins master, then it additionally needs build tools and a workspace for source code, which puts a performance overload on the system, and if the Jenkins master crashes, it increases the downtime of your build and release cycle.
Q124) What are the main benefits of DevOps
+
With a single cross-functional team working in collaboration, DevOps organizations can produce with maximum speed, functionality, and innovation. Specific benefits include continuous software delivery and less complexity to manage.
Q125) What are the uses of DevOps tools
+
Gradle: your DevOps tool stack will need a reliable build tool.
Git: one of the most successful DevOps tools, widely applied across the software industry.
Jenkins: the go-to DevOps automation tool for many software teams.
Bamboo. Docker. Kubernetes. Puppet Enterprise. Ansible.
Q126) What is DevOps, for a beginner
+
DevOps is a culture which supports collaboration between the development and operations teams to deploy code to production faster, in an automated and repeatable way. In simple words, DevOps can be defined as an alignment of development and IT operations with better communication and collaboration.
Q127) What are the roles and responsibilities of a DevOps engineer
+
A DevOps engineer works with developers and the IT staff to manage code releases. They are either developers who become interested in deployment and operations, or sysadmins who develop a passion for scripting and coding and move toward the development side, where they can improve the planning of test and deployment.
Q128) Which are the top DevOps tools Which tools have you worked on
+
Discover the trending top DevOps tools, including Git. Well, if you are considering DevOps to be a tool, you are wrong! DevOps is not a tool or software; it's a culture that you can adopt for continuous improvement, and by practicing it you can easily coordinate the work among your team.
Q129) Explain the typical characteristics involved in DevOps
+
Commitment at the senior level in the organization.
Need for change to be delivered across the organization.
Version control software.
Automated tools for compliance to process.
Automated testing.
Automated deployment.
Q130) What are your expectations from a career perspective of DevOps
+
To be involved in the end-to-end delivery process, and in the most important phase: helping to change the process so as to allow the development and operations teams to work together and understand each other's point of view.
Q131) What does configuration management mean in terms of infrastructure Review some popular tools used
+
In software engineering, software configuration management is the dedicated task of tracking and controlling changes to the configuration of the infrastructure. It is done for deploying, configuring, and maintaining servers.
Q132) How will you approach a project that needs to implement DevOps
+
As the application is developed and deployed, we do need to monitor its performance. Monitoring is also really important because it can help uncover defects which might not have been detected earlier.
Q133) Explain Continuous Testing
+
Given that the goal of continuous integration is to take the application out to the end users, it is primarily enabling continuous delivery. This cannot be completed without an adequate amount of unit testing and automation testing. Hence, we must validate that the code built and integrated by all the developers works as required.
Q134) Explain Continuous Delivery
+
Continuous delivery is an extension of continuous integration which primarily serves to get the features the developers are developing out to the end users as soon as possible. During this process, the build passes through several stages of QA, staging, etc. before delivery to the PRODUCTION system.
Q135) What are the tasks and responsibilities of a DevOps engineer
+
In this role, you'll work collaboratively with software engineering to deploy and operate our systems; help automate and streamline our procedures and processes; build and maintain tools for deployment, monitoring, and operations; and troubleshoot and resolve problems in our dev, test, and production environments.
Q136) What should a DevOps engineer know
+
A DevOps engineer works with developers and the IT staff to manage code releases. They are either developers who become involved in deployment and web services, or sysadmins who develop a passion for scripting and coding and move into the development side, where they can improve the planning of testing and deployment.
Q137) How much does a DevOps engineer make
+
A lead DevOps engineer can earn between $137,000 and $180,000, according to April 2018 job data from Glassdoor. The average salary of a lead DevOps engineer based in the Big Apple is $141,452.
Q138) What are the specific skills required for a DevOps engineer
+
While tech abilities are a must, strong DevOps engineers also possess the ability to collaborate and multi-task, and they always put the customer first; these are critical skills that every DevOps engineer needs for success.
Q139) What is DevOps and why is it important
+
Implementing this new approach brings many advantages to an organization. A seamless collaboration can be set up among the teams of developers, test managers, and operations executives, so they can work in collaboration with each other to achieve greater output on a project.
Q140) What is meant by the DevOps lifecycle
+
DevOps is an agile connection between development and operations. It is a process followed by the development side as well as the operations side, from the start of the design through to production support. Understanding DevOps is incomplete without the DevOps lifecycle and the tools for an efficient DevOps workflow. A daily workflow based on DevOps ideas allows team members to deliver content faster, be flexible enough to both experiment and deliver value, and helps every part of the organization adopt a learning mentality.
Q142) Can you do DevOps without agile
+
DevOps is one of the key elements that assist you in achieving this. Can you do agile software development without doing DevOps Managing agile software development and being agile are two really different things.
Q143) What exactly is DevOps
+
DevOps is all about bringing together the structure and process of traditional operations, such as supporting deployment, with the tools and practices of traditional development methods, such as source control and versioning.
Q144) Need for Continuous Integration
+
Improves the quality of software. Reduces the time taken for delivery. Allows the dev team to detect and locate problems early.
Q145) Success factors for Continuous Integration
+
Maintain a code repository. Automate the build. Perform daily check-ins and commits to the baseline. Test in a clone environment. Keep the build fast. Make it easy to get the newest deliverables.
Q146) Can we copy a Jenkins job from one server to another server
+
Yes, we can do that using one of the following ways:
We can copy the Jenkins jobs from one server to the other by copying the corresponding jobs folder.
We can make a copy of an existing job by making a clone of the job directory with a different name.
We can rename an existing job by renaming the directory.
Q147) How can we create a backup and copy in Jenkins
+
To create a backup or copy, we need to back up the JENKINS_HOME directory, which contains the details of all the job configurations, build details, etc. (a command sketch follows below).
Q148) Difference between "poll scm" and "build periodically"
+
Poll SCM will trigger the build only if it detects a change in SCM, whereas Build Periodically will trigger the build once the given time period has elapsed.
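A minimal backup sketch for Q147, assuming JENKINS_HOME is /var/lib/jenkins; the path and archive name are illustrative:

tar -czf jenkins-backup-$(date +%F).tar.gz /var/lib/jenkins

Restoring is the reverse: stop Jenkins, unpack the archive over JENKINS_HOME, and start Jenkins again.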
Q149) What is the difference between a Docker image and a Docker container
+
A Docker image is a read-only template that contains the instructions for a container to start. A Docker container is a runnable instance of a Docker image.
Q150) What is Application Containerization
+
It is an OS-level virtualization technique used to deploy an application without launching an entire VM for each application, where multiple isolated applications or services can access the same host and run on the same OS.
Q151) Syntax for building a Docker image
+
docker build -f <Dockerfile> -t imagename:version .
Q152) Running a Docker image
+
docker run -dt --restart=always -p <host-port>:<container-port> -h <hostname> -v <host-path>:<container-path> imagename:version
Q153) How to log into a container
+
docker exec -it <container> /bin/bash
Q154) What is Puppet
+
Puppet is a configuration management tool; it is used to automate administration tasks.
Q155) What is Configuration Management
+
Configuration management is a systems engineering process. Configuration management applied over the life cycle of a system provides visibility and control of its performance and its functional and physical attributes, recording their status in support of change management.
Q156) List the software configuration management features
+
Enforcement
Cooperating enablement
Version control friendly
Enables change control processes
Q157) List out the 5 best software configuration management tools
+
CFEngine configuration tool
CHEF configuration tool
Ansible configuration tool
Puppet configuration tool
SALTSTACK configuration tool
Q158) Why should Puppet be chosen
+
It has good community support.
It is easy to learn.
Its programming language is a DSL.
It is open source.
Q159) What is Saltstack
+
SaltStack is based on the Python programming and scripting language. It is also a configuration tool. SaltStack works on a non-centralized model or on a master-client setup model. It provides both push and SSH methods to communicate with clients.
Q160) Why should Puppet be chosen
+
There are some reasons Puppet should be chosen: Puppet is open source; it is easy to learn; its programming language is a DSL; and Puppet has good community support.
Q161) Advantages of VCS
+
Multiple people can work on the same project, and it helps us keep track of the files and documents and their changes.
We can merge the changes from multiple developers into a single stream.
It helps us revert to an earlier version if the current version is broken.
It helps us maintain multiple versions of the software at the same location without rewriting.
Q162) Advantages of DevOps
+
Below are the major advantages.
Technical: continuous software delivery; less complexity; faster resolution.
Business: faster delivery of features; a more stable operating environment; improved communication and collaboration between various teams.
Q163) Use cases where we can use DevOps
+
Explain the legacy/old procedures that are followed to develop and deploy software, the problems of that approach, and how we can solve those issues using DevOps. For the 1st and 2nd points: development of the application, problems in build and deployment, problems in operations, and problems in debugging and fixing issues. For the 3rd point: explain the various technologies we can use to ease deployments; for development, explain taking small features through development and how that helps testing and issue fixing.
Q164) Major difference between Agile and DevOps
+
Agile is the set of rules/principles and guidelines about how to develop software. There are chances that this developed software works only in the developer's environment. But to release that software for public consumption and deploy it in a production environment, we use DevOps tools and techniques for the operation of that software. In a nutshell, Agile is the set of rules for the development of software, but DevOps focuses on the development as well as the operation of the developed software in various environments.
Q165) What are the benefits of NoSQL
+
Non-relational and schema-less data models
Low latency and high performance
Highly scalable
Q166) What drives the adoption of DevOps in industry
+
Use of agile and other development processes and methods.
Demand for an increased rate of production releases from application and business stakeholders.
Wide availability of virtualized and cloud infrastructure from both internal and external providers.
Increased usage of data center automation and configuration management tools.
Increased focus on test automation and continuous integration methods.
Best practices on critical issues.
Q167) How is Chef used as a CM tool
+
Chef is considered to be one of the preferred industry-wide CM tools. Facebook migrated its infrastructure and backend IT to the Chef platform, for example. Explain how Chef helps you avoid delays by automating processes. The scripts are written in Ruby. It can integrate with cloud-based platforms and configure new systems. It provides many libraries for infrastructure development that can later be deployed within a software stack. Thanks to its centralized management system, one Chef server is enough to be used as the center for deploying various policies.
Q168) Why are configuration management processes and tools important
+
Talk about multiple software builds, releases, revisions, and versions for each piece of software or testware being developed. Move on to explain the need for storing and maintaining data, keeping track of development builds, and simplified troubleshooting. Don't forget to mention the key CM tools that can be used to achieve these objectives. Talk about how tools like Puppet, Ansible, and Chef help in automating software deployment and configuration on several servers.
Q169) Which are some of the most popular DevOps tools
+
The most popular DevOps tools include:
Selenium
Puppet
Chef
Git
Jenkins
Ansible
Q170) What is Vagrant and what are its uses
+
Vagrant used VirtualBox as the hypervisor for virtual environments, and in the current scenario it also supports KVM (Kernel-based Virtual Machine). Vagrant is a tool that can create and manage environments for testing and developing software.
Q171) How is DevOps helpful to developers
+
It helps them fix bugs and implement new features quickly, and it provides clarity of communication among team members.
Q172) Name the popular scripting language of DevOps
+
Python
Q173) List the myths about Agile methodology and DevOps
+
DevOps is a process.
Agile is the same as DevOps.
Separate groups are formed.
It is problem solving.
Developers managing production.
DevOps is development-driven release management.
Q174) In which areas is DevOps implemented
+
Production development
Creation of production feedback and its development
IT operations development
Q175) What is the scope for SSH
+
SSH is Secure Shell, which provides users with a secure, encrypted mechanism to log into systems and transfer files.
To log into a remote machine and work on the command line.
To secure encrypted communications between two hosts over an insecure network.
Q176) What are the advantages of DevOps with respect to the technical and business perspectives
+
Technical benefits:
Software delivery is continuous.
Reduced complexity in problems.
Faster approach to resolving problems.
Manpower is reduced.
Business benefits:
Higher rate of delivering features.
More stable operating environments.
More time gained to add value.
Enabling faster feature time to market.
Q177) What are the core operations of DevOps in terms of development and infrastructure
+
The core operations of DevOps:
Application development: code developing, code coverage, unit testing, packaging, deployment.
Infrastructure: provisioning, configuration, orchestration, deployment.
Q178) What are the anti-patterns of DevOps
+
A pattern is a common usage usually followed. If a pattern commonly adopted by others does not work for your organization and you continue to blindly follow it, you are essentially adopting an anti-pattern. There are myths about DevOps. Some of them include:
DevOps is a process
Agile equals DevOps
We need a separate DevOps group
DevOps will solve all our problems
DevOps means developers managing production
DevOps is development-driven release management
DevOps is not development driven
DevOps is not IT operations driven
We can't do DevOps, we're unique
We can't do DevOps, we've got the wrong people
Q179) What is the most important thing DevOps helps us achieve
+
The most important thing that DevOps helps us achieve is to get changes into production as quickly as possible while minimizing risks in software quality assurance and compliance. This is the primary objective of DevOps. For example, clear communication and better working relationships between teams, i.e. both the Ops team and the Dev team collaborating to deliver good-quality software, in turn leads to higher customer satisfaction.
Q180) How can you make sure a new service is ready for the product launch
+
Backup system
Recovery plans
Load balancing
Monitoring
Centralized logging
Q181) How do all these tools work together
+
Given below is a generic logical flow where everything gets automated for seamless delivery. However, the flow may vary from organization to organization as per requirements.
Developers develop the code, and this source code is managed by version control system tools like Git.
Developers send this code to the Git repository, and any changes made in the code are committed to this repository.
Jenkins pulls this code from the repository using the Git plugin and builds it using tools like Ant or Maven.
Configuration management tools like Puppet deploy and provision the testing environment, and then Jenkins releases this code onto the test environment, where testing is done using tools like Selenium.
Once the code is tested, Jenkins sends it for deployment onto the production server (even the production server is provisioned and maintained by tools like Puppet).
After deployment, it is continuously monitored by tools like Nagios.
Docker containers provide a testing environment to test the build features.
Q182) Which are the top DevOps tools
+
The most popular DevOps tools are mentioned below:
Git: version control system tool
Jenkins: continuous integration tool
Selenium: continuous testing tool
Puppet, Chef, Ansible: configuration management and deployment tools
Nagios: continuous monitoring tool
Docker: containerization tool
Q183) How is DevOps different from Agile / SDLC
+
Agile is a set of values and principles about how to produce, i.e. develop, software. For example: if you have some ideas and you want to turn those ideas into working software, you can use the Agile values and principles as a way to do that. But that software might only be working on a developer's laptop or in a test environment. You want a way to quickly, easily, and repeatably move that software into production infrastructure in a safe and simple way; to do that you need DevOps tools and techniques. You can summarize by saying that the Agile software development methodology focuses on the development of software, but DevOps, on the other hand, is responsible for the development as well as the deployment of the software in the safest and most reliable way possible.
Q184) What is the need for DevOps
+
According to me, this answer should start by explaining the general market trend. Instead of releasing big sets of features, companies are trying to see if small features can be transported to their customers through a series of release trains. This has many advantages, like quick feedback from customers and better quality of software, which in turn leads to high customer satisfaction. To achieve this, companies are required to:
Increase deployment frequency
Lower the failure rate of new releases
Shorten the lead time between fixes
Achieve a faster mean time to recovery in the event of a new release crashing
Q185) What is meant by Continuous Integration
+
It's a development practice that requires developers to integrate code into a shared repository several times a day. Each check-in is then verified by an automated build, allowing teams to detect problems early.
Q186) Mention some of the useful plugins in Jenkins
+
Below I have mentioned some important plugins:
Maven 2 project
Amazon EC2
HTML Publisher
Copy Artifact
Join
Green Balls
Q187) What is Version control
+
It's the system that records changes to a file or set of files over time so that you can recall specific versions later.
Q188) What are the uses of Version control
+
Revert files back to a previous state.
Revert the entire project back to a previous state.
Compare changes over time.
See who last modified something that might be causing a problem.
See who introduced an issue and when.
Q189) What are containers
+
Containers are a form of lightweight virtualization, heavier than 'chroot' but lighter than 'hypervisors'. They provide isolation among processes.
Q190) What is meant by Continuous Integration
+
It is a development practice that requires developers to integrate code into a shared repository several times a day.
Q191) What’s a PTR in DNS
+
A pointer (PTR) record is used for reverse DNS (Domain Name System) lookups.
Q192) What testing is necessary to ensure a new service is ready for production
+
Continuous testing
Q193) What is Continuous Testing
+
It is the process of executing tests as part of the software delivery pipeline to obtain immediate feedback on the business risks associated with the latest build.
Q194) What is Automation Testing
+
Automation testing, or test automation, is the process of automating a manual process to test the application/system under test.
Q195) What are the key elements of continuous testing
+
Risk assessment, policy analysis, requirements traceability, advanced analysis, test optimization, and service virtualization.
Q196) What are the testing types supported by Selenium
+
Regression testing and functional testing.
Q197) What is Puppet
+
It is a configuration management tool which is used to automate the administration of tasks.
Q198) How does HTTP work
+
The HTTP protocol works in a client and server model, like most other protocols. A web browser, from which a request is initiated, is called a client, and the web server software which responds to that request is called a server. The World Wide Web Consortium and the Internet Engineering Task Force are the two important bodies for the standardization of the HTTP protocol.
Q199) Describe two-factor authentication
+
Two-factor authentication is a security process in which the user provides two means of identification from separate categories of credentials.
Q200) What is git add
+
Adds the file changes to the staging area
Q201) What is git commit
+
Records the changes from the staging area to the local repository, moving HEAD forward
Q202) What is git push
+
Sends the changes to the remote repository
Q203) What is git checkout
+
Switches branches or restores working tree files
Q204) What is git branch
+
Creates a new branch (or lists existing branches)
Q205) What is git fetch
+
Fetches the latest history from the remote server and updates the local repo (without merging it into your working branch)
Q206) What is git merge
+
Joins two or more branches together
Q207) What is git pull
+
Fetches from and integrates with another repository or a local branch (git fetch + git merge)
Q208) What is git rebase
+
Process of moving or combining a sequence of commits to anew base commit
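A minimal sketch of replaying a feature branch onto the tip of master; the branch names are illustrative:

git checkout feature
git rebase master    # re-applies the feature commits on top of master's latest commit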
Q209) What is git revert
+
Reverts a commit that has already been published and made public, by creating a new commit that undoes its changes
Q210) What is git clone
+
Clones the git repository and creates a working copy on the local machine
Q211) What is the difference between Ansible playbooks and roles
+
Roles are a restructured form of plays; plays live in playbooks. A role is a set of tasks to accomplish a specific function, while a playbook maps hosts to roles. Role examples: common, webservers. Playbook examples: site.yml, fooservers.yml, webservers.yml.
Q212) How do I see a list of all the ansible_ variables
+
Ansible collects "facts" about machines automatically, and these facts can be accessed in playbooks and in templates. To see a list of all the facts about a machine, you can run the "setup" module as an ad-hoc action:
ansible -m setup hostname
It will print a dictionary of all the facts available for that particular host.
Q213) What is Docker
+
Docker is a container technology that packages your application and all its dependencies together in the form of containers, to ensure that your application runs seamlessly in any environment.
Q214) What is a Docker image
+
A Docker image is the source of a Docker container. In other words, Docker images are used to create containers.
Q215) What is a Docker container
+
A Docker container is a running instance of a Docker image.
Q216) Do we consider DevOps to be an agile methodology
+
Of course we do! The only difference between agile methodology and DevOps is that agile is implemented for the development side, while DevOps covers both development and operations.
Q217) What are the benefits of using Git
+
Data redundancy and replication
High availability
Only one .git directory per repository
Superior disk usage and network performance
Collaboration friendly
Git can be used for any kind of project
Q218) What is a kernel
+
A kernel is the lowest-level software that interfaces with the hardware of your computer.
Q219) What is the difference between grep -i and grep -v
+
grep -i ignores case when matching, while grep -v inverts the match, printing only the lines that do not match. For example:
ls | grep -i docker
Dockerfile
docker.tar.gz
ls | grep -v docker
Desktop
Dockerfile
Documents
Downloads
(you cannot find anything with the name docker.tar.gz in the second output)
Q220) How can you define swap space at a specific location
+
This feature is generally used to give the server additional swap space. For example, on the machine below I want to create 1 GB of swap space:
dd if=/dev/zero of=/swapfile1 bs=1G count=1
Q221) What is the concept of sudo in Linux
+
sudo is a program for Unix- and Linux-based systems that provides the ability to allow specific users to run specific commands at the system's root level.
Q222) What is a Jenkins pipeline
+
Jenkins Pipeline (or simply "Pipeline") is a suite of plugins that supports and enables continuous delivery pipelines in Jenkins.
Q223) How to stop and restart a Docker container
+
Stop a container: docker stop <container id>
Restart a container: docker restart <container id>
Q224) On which platforms does Docker run
+
Docker runs on Linux and cloud platforms:
Linux: Ubuntu 12.04 LTS+, Fedora 20+, RHEL 6.5+, CentOS 6+, Gentoo, ArchLinux, openSUSE 12.3+, CRUX 3.0+
Cloud: Amazon EC2, Google Compute Engine, Microsoft Azure, Rackspace
Docker is not supported natively in production on Windows or Mac; yes, even on Windows you can use it, but for testing purposes.
Q225) What are the tools used for Docker networking
+
We usually use Kubernetes and Docker Swarm for Docker networking.
Q226) What does Tucker write
+
You would like to have a number of taxiers containers, andat that time you need to create a file that creates a docer and type the commandto make a taxi-up. It runs all containers mentioned in the docer composefile.
Q227) What is a scrum
+
Scrum is used to divide complex software and product development tasks into small pieces, using iterations and incremental practices. Each iteration is typically two weeks long. Scrum has three roles: product owner, scrum master, and team.
Q228) What is the purpose of SSH
+
SSH is a secure shell that allows users to log in to computers through a secure, encrypted mechanism and to transfer files, to access a remote machine and work on the command line, and to secure encrypted communications between two hosts on an unsafe network.
Q229) Where is DevOps implemented
+
Product development; product feedback and its development; IT operations development.
Q230) List the DevOps agile methodology
+
DevOps is a process; Agile is the same as DevOps; a separate group is framed; it is problem solving; developers managing production; DevOps is development-driven release management.
Q231) List the main differences between Agile and DevOps
+
Agile: Agile is about software development. DevOps: DevOps is about software deployment and management. DevOps does not replace Agile or Lean; by removing waste, removing handoffs, and improving processes, it enables rapid and continuous product delivery.
Q232) What is the popular scripting language of DevOps
+
Python
Q233) How does DevOps help developers
+
It helps to fix defects and ship new features quickly, and it improves the clarity of coordination between the members of the team.
Q234) What is Vagrant and its uses
+
Vagrant used VirtualBox as the hypervisor for virtual environments, and in the current scenario it also supports KVM (Kernel-based Virtual Machine). Vagrant is a tool for creating and managing environments for software development and testing.
Q235) What is the main difference between the Linux and Unix operating systems
+
Unix: It belongs to the multitasking, multiuser family of operating systems. Unix systems are most often used on web servers and workstations. It was originally derived from AT&T Unix, started at the Bell Labs research center in the 1970s by Ken Thompson, Dennis Ritchie, and many others. Linux: Linux is likely familiar to every programmer; it is widely used on personal computers. It is a Unix-like operating system built around the Linux kernel, and both families include open-source systems, Linux remaining deliberately similar to Unix.
Q236) How can we ensure that a new service is ready for production launch
+
Backup systems; recovery plans; load balancing; monitoring; centralized logging.
Q237) What are the benefits of NoSQL
+
A flexible, schema-less data model; low latency and high performance; very high scalability.
Q238) What is driving DevOps adoption in the industry
+
1. Use of agile and other development processes and methods. 2. Demand from business and application stakeholders for an increased rate of production releases. 3. Wide availability of virtualized and cloud infrastructure from internal and external providers. 4. Increased use of data center automation and configuration management tools. 5. Focus on test automation and continuous integration methods. 6. Best practices on critical issues.
Q239) What are the benefits of a NoSQL database over RDBMS
+
Benefits: very little ETL is needed; support for unstructured text is provided; changes over time are handled; broad functionality; the ability to scale horizontally; many data structures are provided; vendors may be selected freely.
Q240) What are the top 10 capabilities of a person in a DevOps position
+
Excellent system administration; virtualization experience; good technical skills; great scripting; good development skills; experience with automation tools such as Chef; people management; customer service; real-time cloud operations; and genuine concern for everyone affected by a change.
Q241) What is PTR in DNS
+
A Pointer (PTR) record is used for reverse DNS (Domain Name System) lookups.
Q242) What do you know about DevOps
+
Your answer should be simple and straightforward. Start by explaining the growing importance of DevOps in information technology: development and operations efforts are integrated to accelerate the delivery of software products with a minimal failure rate. DevOps is a value-driven practice in which development and operations engineers take part together across the entire product or service lifecycle, from design through development to production support.
Q243) Why has DevOps become so important in the last few years
+
Before discussing the growing reputation of DevOps, discuss the current industry scenario. Begin with some examples of how big players like Netflix and Facebook use DevOps to develop and deploy applications at enormous scale. Mention Facebook's continuous deployment and code-ownership models, and how it scales them while ensuring the quality of the experience: hundreds of lines of code are deployed without affecting quality, stability, and security. Your next example should be Netflix. This streaming video-on-demand company follows similar practices, with fully automated processes and systems. Mention the user base of these two companies: Facebook has 2 billion users, while Netflix streams online content to more than 100 million users worldwide. These examples show how DevOps reduces the lead time between bug fixes, enables frequent releases and continuous delivery, and reduces overall human costs.
Q244) What are some of the most popular DevOps tools
+
The most popular DevOps tools include: Selenium, Puppet, Chef, Git, Jenkins, Ansible, and Docker.
Q245) What is Version Control, and why should a VCS be used
+
Define version control and talk about how it records every change to one or more files and stores them in a centralized repository. VCS tools remember previous versions and help to: ensure that changes made over time are not lost; roll specific files or an entire project back to an older version; examine the problems or errors introduced by a particular change. Using a VCS, developers get the flexibility to work simultaneously on a particular file, with all changes logically connected.
Q246) Is there a difference between Agile and DevOps? If yes, please explain
+
As a DevOps engineer, interview questions like this are very much expected. Start by acknowledging the clear overlap between DevOps and Agile. Although DevOps work is often associated with agile methodology, there is a clear difference between the two. Agile principles are concerned with the development of the software product. DevOps, on the other hand, deals with development plus deployment and operations, ensuring quick turnaround times, minimal errors, and reliability by releasing the software continuously.
Q247) Why are configuration management processes and tools important
+
Talk about the many releases, edits, and versions produced for each piece of software or testware. Describe the need for storing and maintaining data, tracking development builds, and tracing errors easily. Do not forget to mention the key CM tools that can be used to achieve these goals, and talk about how tools such as Puppet, Ansible, and Chef are useful for automating software deployment and configuration on multiple servers.
Q248) How is Chef used as a CM tool
+
Chef is considered one of the preferred professional CM tools; Facebook, for example, migrated its infrastructure to the Chef platform, which keeps track of its IT estate. Explain how Chef helps avoid delays by automating processes. Its scripts are written in Ruby. It can be integrated with cloud-based platforms and configures new systems. It provides many libraries for infrastructure development that can later be deployed within a software stack. Thanks to its centralized management system, a single Chef server is sufficient to act as the center for deploying various policies.
Q249) How do you explain the concept of "Infrastructure as Code" (IaC)
+
It is a good idea to talk about IaC as a concept, sometimes referred to as programmable infrastructure, where infrastructure is treated the same way as any other code. Describe how the traditional approach of managing infrastructure with manual configuration, one-off tools, and custom scripts is taking a back seat.
Q250) List the essential DevOps tools
+
Git, Jenkins, Selenium, Puppet, Chef, Ansible, Nagios, Docker, Monit, ELK (Elasticsearch, Logstash, Kibana), Collectd/Collectl, GitHub.
Q251) What are the main roles of a DevOps engineer regarding development and infrastructure
+
A DevOps engineer's major work roles are: Application development: developing code, code coverage, unit testing, packaging, and deployment onto infrastructure. Infrastructure: continuous integration, continuous testing, continuous delivery, provisioning, configuration, orchestration, and deployment.
Q252) What are the advantages of DevOps from the technical and business perspectives
+
Technical advantages: continuous software delivery; less complexity to manage; faster resolution of problems; less manual effort. Business benefits: faster delivery of features; more stable operating environments; more time available to add value; faster time to market.
Q253) What is the purpose of SSH
+
SSH is a secure shell that allows users to log in to computers through a secure, encrypted mechanism and to transfer files, to access a remote machine and work on the command line, and to secure encrypted communications between two hosts on an unsafe network.
Q254) Which parts of DevOps are implemented
+
Product development; product feedback and its development; IT operations development.
Q255) List the DevOps agile methodology
+
DevOps is a process; Agile is the same as DevOps; a separate group is framed; it is problem solving; developers managing production; DevOps is development-driven release management.
Q256) List the main differences between Agile and DevOps
+
Agile: Agile is about software development. DevOps: DevOps is about software deployment and management. DevOps does not replace Agile or Lean; by removing waste, removing handoffs, and improving processes, it enables rapid and continuous product delivery.
Q257) What is the popular scripting language of DevOps
+
Python
Q258) How does DevOps help developers
+
It helps correct errors and activate new features quickly, and it provides clarity of communication between the members of the group.
Q259) What is Vagrant and its benefits
+
Vagrant used VirtualBox as the hypervisor for virtual environments, and in the current scenario it also supports KVM (Kernel-based Virtual Machine). Vagrant is a tool for creating and managing environments for software development and testing.
Q260) What is the use of Ansible
+
It is mainly used in IT infrastructure to manage or deploy applications on remote nodes. Say we want to deploy an application on 100 nodes by executing a single command; that is where Ansible comes into the picture, though you need some knowledge of Ansible scripting to write the playbooks.
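A minimal sketch of that idea as an ad hoc command (the inventory group and package name are illustrative):
ansible webservers -b -m yum -a "name=httpd state=present" -f 10   # install httpd on every host in the webservers group, 10 forks in parallel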
Q1.What is Infrastructure as Code
+
Answer: The configuration of any servers, toolchain, or application stack required by an organization can be expressed as a more descriptive level of code, and that code can be used to provision and manage infrastructure elements like virtual machines, software, and network elements. It differs from scripts written in any language, which are a series of static steps coded by hand, in that version control can be used to track environment changes. Example tools are Ansible and Terraform.
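A minimal sketch of the IaC workflow with Terraform, assuming a main.tf describing the desired infrastructure is already checked into version control:
terraform init    # download the providers referenced by the configuration
terraform plan    # show the diff between the desired state and the real infrastructure
terraform apply   # converge the infrastructure to the described state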
Q2.What are the areas where version control enables efficient DevOps practice
+
Answer: Obviously the main area of version control is source code management, where every developer's code should be pushed to a common repository for maintaining build and release in CI/CD pipelines. Another area is version control for administrators when they use Infrastructure as Code (IaC) tools and practices for maintaining the environment configuration. Another area of version control is artifact management, using repositories like Nexus and DockerHub.
Q3.Why do opensource tools boost DevOps
+
Answer: Opensource tools are predominantly used by any organization that is adapting to (or has adopted) DevOps pipelines, because DevOps came with a focus on automation in various aspects of organizational build, release, and change management, as well as infrastructure management.

Developing or using a single tool for all of this is impossible, everything is basically in a trial-and-error phase of development, and agile cuts down the luxury of developing a single tool, so the opensource tools available on the market pretty much serve every purpose and give organizations an option to evaluate each tool based on their need.

Q4.What is the difference between Ansible and Chef (or) Puppet
+
Answer: Ansible is an agentless configuration management tool, whereas Puppet or Chef needs an agent running on the managed node. Chef and Puppet are based on a pull model, where the cookbook or manifest for Chef and Puppet respectively is pulled from the master by the agent, while Ansible uses ssh to communicate and gives data-driven instructions to the nodes to be managed, more like RPC execution. Ansible uses YAML scripting, whereas Puppet (or) Chef is built with Ruby and uses its own DSL.
Q5.What is the folder structure of roles in ansible
+
Answer:
roles/
  common/
    tasks/
    handlers/
    files/
    templates/
    vars/
    defaults/
    meta/
  webservers/
    tasks/
    defaults/
    meta/

Where common is the role name; under tasks there will be tasks (or) plays present; handlers holds the handlers for any tasks; files holds static files for copying (or) moving to remote systems; templates holds Jinja-based templating; vars holds common vars used by playbooks.
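This layout need not be created by hand; a scaffold like it can be generated with (the role name is illustrative):
ansible-galaxy init common   # creates common/ with tasks, handlers, files, templates, vars, defaults, and meta subfolders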

Q6. What is Jinja2 templating in Ansible playbooks and what is its use
+
Answer: Jinja2 is the standard templating engine in the Python world; think of it like a sed editor for Ansible. It is used when there is a need for dynamic alteration of a config file for an application. For example, consider mapping a MySQL application to the IP address of the machine where it is running: the value cannot be static, so it needs to be altered dynamically at runtime.

Format

{{ foo.bar }}

The vars within the {{ }} braces are replaced by Ansible at run time when the template module is used.

Q7. What is the need for organizing playbooks as roles; is it necessary
+
Answer: Organizing playbooks as roles gives more readability and reusability to any plays. Consider a task where MySQL installation should be done after the removal of Oracle DB, and another requirement where MySQL must be installed after a Java installation. In both cases we need to install MySQL, but without roles we would need to write separate playbooks for both use cases, whereas with roles, once the MySQL installation role is created it can be utilised any number of times by invoking it with logic in site.yaml.

No, it is not necessary to create roles for every scenario, but creating roles is a best practice in Ansible.

Q8.What is the main disadvantage of Docker containers
+
Answer: Data lives only as long as its container does; after a container is destroyed you cannot retrieve any data inside it, the data is lost forever. However, persistent storage for data inside containers can be achieved by mounting volumes from an external source like the host machine or an NFS driver.
Q9. What are docker engine and docker compose
+
Answer: The Docker engine contacts the Docker daemon inside the machine and creates the runtime environment and process for any container. Docker Compose links several containers to form a stack, used in creating application stacks like LAMP, WAMP, and XAMPP.
Q10. What are the different modes in which a container can be run
+
Answer: A Docker container can be run in two modes. Attached: it runs in the foreground of the system you are running it on, and provides a terminal inside the container when the -t option is used with it; every log is redirected to the stdout screen. Detached: this mode is usually used in production, where the container is detached as a background process and every output inside the container is redirected to log files under /var/lib/docker/containers/<container-id>/, which can be viewed by the docker logs command.
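For example (the image names are illustrative):
docker run -it ubuntu bash    # attached: interactive terminal in the foreground
docker run -d nginx           # detached: runs as a background process
docker logs <container-id>    # view the captured stdout of a detached container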
Q11. What will the output of the docker inspect command be
+
Answer: docker inspect <container-id> gives output in JSON format, which contains details like the IP address of the container inside the Docker virtual bridge, volume mount information, and every other piece of host- (or) container-specific information, such as the underlying file driver and log driver used. docker inspect [OPTIONS] NAME|ID [NAME|ID…] Options:
--format, -f   Format the output using the given Go template
--size, -s     Display total file sizes if the type is container
--type         Return JSON for specified type
Q12.Which command can be used to check the resource utilization of docker containers
+
Answer: The docker stats command can be used to check the resource utilization of any docker container. It gives output analogous to the top command in Linux, and it forms the base for container resource monitoring tools like cAdvisor, which gets its output from the docker stats command. docker stats [OPTIONS] [CONTAINER…] Options:
--all, -a     Show all containers (default shows just running)
--format      Pretty-print stats using a Go template
--no-stream   Disable streaming stats and only pull the first result
--no-trunc    Do not truncate output
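For example, a one-shot snapshot of a few columns (the format string is illustrative):
docker stats --no-stream --format "table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}"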
Q13.What is the major difference between continuous deployment and continuous delivery
+
Answer: Continuous deployment is fully automated, and deploying to production needs no manual intervention, whereas in continuous delivery the deployment to production involves some manual intervention for change management in the organization, and it needs to be approved by a manager or higher authority before being deployed to production. The continuous deployment (or) delivery approach is chosen according to the risk factor your application carries for the organization.
Q14.How to execute some task (or) play on localhost only while executing playbooks on different hosts in ansible
+
Answer: In ansible there is a task directive called delegate_to; in this section, provide the particular host (or) hosts where your task (or) tasks need to be run.
tasks:
  - name: "Elasticsearch Hitting"
    uri: url='{{ url2 }}/_search?q=status:new' headers='{"Content-type":"application/json"}' method=GET return_content=yes
    register: output
    delegate_to: 127.0.0.1
Q15. What is the difference between set_fact and vars in ansible
+
Answer: A set_fact sets the value for a fact once and it then remains static, even though the underlying value is quite dynamic, whereas a var keeps re-evaluating as the value behind the variable changes.
tasks:
  - set_fact:
      fact_time: "Fact: {{ lookup('pipe', 'date \"+%H:%M:%S\"') }}"
  - debug: var=fact_time
  - command: sleep 2
  - debug: var=fact_time

- name: lookups in variables vs. lookups in facts
  hosts: localhost
  vars:
    var_time: "Var: {{ lookup('pipe', 'date \"+%H:%M:%S\"') }}"
Even though the lookup for date is used in both cases, where vars are used the value alters from time to time, every time it is evaluated within the playbook lifetime, but the fact always remains the same once the lookup is done.
Q16. What is the lookup in ansible and what are the lookup plugins supported by ansible
+
Answer: Lookup plugins allow access to data in Ansible from outside sources. These plugins are evaluated on the Ansible control machine, and can include reading the filesystem but also contacting external datastores and services. The format is {{ lookup('<plugin>', '<source (or) connection_string>') }}. Some of the lookup plugins supported by ansible are file, pipe, redis, jinja templates, and the etcd kv store.
Q17. How can you delete the docker images stored on your local machine, and how can you do it for all the images at once
+
Answer: The command docker rmi <image-id> can be used to delete a docker image from the local machine, though some images may need to be deleted forcefully because the image may be in use by some other container (or) another image. To delete all images at once you can use the combination docker rmi $(docker images -q), where docker images gives the image list and the -q switch returns only the IDs of the docker images.
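If you only want to clear images that nothing references, a softer option exists:
docker image prune        # remove dangling images only
docker image prune -a     # remove all images not used by at least one container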
Q18. What are the folders in a Jenkins installation and their uses
+
Answer: JENKINS_HOME – which will be /$JENKINS_USER/.jenkins – is the root folder of any Jenkins installation, and it contains subfolders, each for a different purpose. jobs/ – contains all the information about all the jobs configured in the Jenkins instance; inside jobs/ you will have a folder created for each job, and inside those folders build folders named according to each build number; each build has its log files, which we see in the Jenkins web console. plugins/ – where all your plugins are listed. workspace/ – present to hold all the workspace files, like your source code pulled from SCM.
Q19. What are the ways to configure the Jenkins system
+
Answer: Jenkins can be configured in two ways. Web: there is an option called Configure System; in that section you can make all configuration changes. Manually on the filesystem: every change can also be made directly in the Jenkins config.xml file under the Jenkins installation directory; after you make changes on the filesystem, you need to restart Jenkins, either directly from the terminal, (or) by using Reload Configuration from Disk under the Manage Jenkins menu, or by hitting the /restart endpoint directly.
Q20. What is the role of the HTTP REST API in DevOps
+
Answer: DevOps is purely focused on automating your infrastructure and providing changes over a pipeline in different stages; every CI/CD pipeline has stages like build, test, sanity test, UAT, and deployment to the prod environment. In each stage different tools are used and different technology stacks are presented, and there needs to be a way to integrate the different tools to complete the toolchain; there comes the need for the HTTP API. Every tool communicates with the other tools using its API, and a user can also use an SDK to interact with the tools, like Boto for Python to contact AWS APIs for automation based on events. Nowadays it is not batch processing anymore; pipelines are mostly event driven.
Q21. What are microservices, and how do they power efficient DevOps practices
+
Answer: In traditional architecture every application is a monolith: something developed by one group of developers, deployed as a single application on multiple machines, and exposed to the outer world using load balancers. Microservices means breaking your application down into small pieces, where each piece serves a different function needed to complete a single transaction. With this breakdown, developers can also be organized into small groups, and each piece of the application may follow different guidelines for an efficient development phase, which agile development demands; every service uses a REST API (or) message queues to communicate with the other services. The build and release of a non-robust version of one service therefore does not affect the whole architecture; instead, only some functionality is lost. This provides the assurance of efficient and faster CI/CD pipelines and DevOps practices.
Q22. What are the ways a pipeline can be created in Jenkins
+
Answer: There are two ways a pipeline can be created in Jenkins. Scripted pipelines: more like a programming approach. Declarative pipelines: a DSL approach specifically for creating Jenkins pipelines. The pipeline should be written in a Jenkinsfile, and the location can be either in SCM or the local system. Declarative and scripted pipelines are constructed fundamentally differently. Declarative pipeline is a more recent feature of Jenkins Pipeline which provides richer syntactical features over scripted pipeline syntax and is designed to make writing and reading pipeline code easier.
Q23. What are labels in Jenkins, and where can they be utilised
+
Answer: A CI/CD solution needs to be centralized, so that every application in the organization can be built by a single CI/CD server. In an organization there may be different kinds of applications, like Java, C#, .NET, etc., and with the microservices approach your programming stack is loosely coupled for the project. So you can set labels on each node and select the option to only build jobs whose label expression matches that node; when a build is scheduled with the label of that node on it, it waits for the next executor in that node to become available, even though there are free executors in other nodes.
Q24. What is the use of Blue Ocean in Jenkins
+
Answer: Blue Ocean rethinks the user experience of Jenkins. Designed from the ground up for Jenkins Pipeline, but still compatible with freestyle jobs, Blue Ocean reduces clutter and increases clarity for every member of the team. It provides a sophisticated UI to identify each stage of the pipeline, better pinpointing of issues, and a very rich pipeline editor for beginners.
Q25. What are callback plugins in ansible; give some examples of callback plugins
+
Answer: Callback plugins enable adding new behaviors to Ansible when responding to events. By default, callback plugins control most of the output you see when running the command-line programs, but they can also be used to add additional output, integrate with other tools, and marshal the events to a storage backend. Whenever a play is executed it produces events, and those events are printed onto the stdout screen; a callback plugin can route them into any storage backend for log processing. An example callback plugin is ansible-logstash, where every playbook execution is fetched by Logstash in JSON format and can be integrated with any other backend source like Elasticsearch.
Q26. What scripting languages can be used in DevOps
+
Answer: As for scripting languages, basic shell scripting is used for build steps in Jenkins pipelines, and Python scripts can be used with other tools like Ansible and Terraform as wrapper scripts for more complex decision-solving tasks in any automation, since Python is superior to shell scripts in deriving complex logic; Ruby scripts can also be used as build steps in Jenkins.
Q27. What is continuous monitoring, and why is monitoring very critical in DevOps
+
Answer: DevOps makes every organization's build and release cycle much shorter with the concept of CI/CD, where every change is reflected in production environments quickly, so production needs to be tightly monitored to get customer feedback. The concept of continuous monitoring is used to evaluate the performance of each application in real time (at least near real time): each application is built to be compatible with application performance monitoring agents, and granular metrics are exported, like JVM stats and even function-level metrics inside the application, in real time to the agents, which in turn feed a backend storage; monitoring teams then use dashboards and alerts on that data to continuously monitor the application.
Q28. Give some examples of continuous monitoring tools
+
Answer: Many continuous monitoring tools are available in the market, used for different kinds of applications and deployment models. Docker containers can be monitored by the cAdvisor agent, which can feed Elasticsearch to store metrics, (or) you can use the TICK stack (Telegraf, InfluxDB, Chronograf, Kapacitor) for whole-system monitoring in NRT (near real time), and you can use Logstash (or) Beats to collect logs from systems, which in turn can use Elasticsearch as a storage backend and Kibana (or) Grafana as a visualizer. System monitoring can also be done by Nagios and Icinga.
Q29. What is docker swarm
+
Answer: A group of virtual machines running Docker Engine can be clustered and maintained as a single system, with the resources shared by the containers; the Docker Swarm master schedules each docker container onto any of the machines in the cluster according to resource availability. docker swarm init can be used to initiate a docker swarm cluster, and docker swarm join with the master IP, run from a client, joins that node into the swarm cluster.
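A minimal sketch (the IP address and token are illustrative placeholders):
docker swarm init --advertise-addr 10.0.0.1                # on the manager node; prints a join token
docker swarm join --token <worker-token> 10.0.0.1:2377     # on each worker node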
Q30. What are the ways to create custom Docker images
+
Answer: Docker images can broadly be created in two ways. Dockerfile: the most used method, where a base image is specified, files are copied into the image, and installation and configuration are done using a declarative file which is given to the docker build command to produce a new docker image. docker commit: the Docker image is spun up as a docker container, every command executed inside the container forms a read-only layer, and after all the changes are done you can use docker commit <container-id> to save it as an image; this method is not suitable for CI/CD pipelines, as it requires manual intervention.
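A minimal sketch of the commit route (the image and tag names are illustrative):
docker run -it ubuntu bash                 # start a container and make changes inside it
docker commit <container-id> myapp:v1      # snapshot the container's layers as a new image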
Q31. Give some important directives in a Dockerfile and an example Dockerfile
+
Answer: FROM – gives the base image to use. RUN – this directive is used to run a command directly in the image. CMD – also runs a command, but the format of the command specification is more arguments-based than a single command like RUN. ADD (or) COPY – to copy files from your local machine into the Docker image you create. ENTRYPOINT – keeps the command without executing it, so when the container is spawned from the image, the command in the entrypoint runs first.
Example Dockerfile:
FROM python:2
MAINTAINER janakiraman
RUN mkdir /code
ADD test.py /code
ENTRYPOINT ["python", "/code/test.py"]
Q32. Give some important Jenkins plugins
+
Answer: SSH slaves plugin
  • Pipeline plugin
  • GitHub plugin
  • Email notifications plugin
  • Docker publish plugin
  • Maven plugin
  • Greenball plugin
Q33.What is the use of vaults in ansible
+
Answer: Vault files are encrypted files which contain variables used by ansible playbooks; vault-encrypted files can be decrypted only with the vault password. While running a playbook, if any vault file is used for a variable inside the playbook, you need to use the --ask-vault-pass command argument when running the playbook.
Q34. How does docker make deployments easy
+
Answer: Docker is a containerization technology, an advance over virtualization. In virtualization, an OS must be spun up before an application can be installed, and spinning up a virtual machine takes a lot of time; it carves space out of the physical hardware, and the hypervisor layer wastes a vast amount of resources just running the virtual machines. After a VM is provisioned, every application needs to be installed, and installation requires all dependencies, which can sometimes be missed even if you double-check; migration of applications from machine to machine is painful. Docker instead shares the underlying OS resources; the Docker engine is lightweight, and every application is packaged with its dependencies, so once tested it works the same everywhere. Migrating an application, or spinning up a new one, is easy: you just need to install Docker on the other machine, and docker image pull and run do all the magic of spinning it up in seconds.
Q35. How can .NET applications be built using Jenkins
+
Answer: .NET applications need Windows nodes to be built. Jenkins can use the Windows slave plugin to connect a Windows node as a Jenkins slave, which uses a DCOM connector for the Jenkins master-to-slave connection, (or) you can use the Jenkins JNLP connector. The build tools and SCM tools used in the .NET application's pipeline need to be installed on the Windows slave; the MSBuild build tool can be used to build the .NET application, which can then be deployed onto Windows hosts using a PowerShell wrapper inside Ansible playbooks.
Q36. How can you make a highly available Jenkins master-master solution without using any Jenkins plugin
+
Answer: Jenkins stores all the build information in the JENKINS_HOME directory, which can be mapped to NFS (or) SAN storage drivers or common file systems. When the node is down, you can implement a monitoring solution using Nagios to check liveness; if the master is down, it can trigger an ansible playbook (or) python script to create a new Jenkins master on a different node and reload at runtime, or fail over to a passive Jenkins master already kept silent in another instance with the same JENKINS_HOME network file store.
Q37. Give the structure of a Jenkinsfile
+
Answer: A Jenkinsfile starts with the pipeline directive. Inside the pipeline directive is the agent directive, which specifies where the build should be run, and the next directive is stages, which contains a list of stage directives; each stage directive contains different steps. There are several optional directives, like options, which configures custom plugins used by the project (or) any other triggering mechanisms used, and the environment directive to provide all env variables.
Sample Jenkinsfile:
pipeline {
  agent any
  stages {
    stage('Docker build') {
      steps {
        sh 'sudo docker build . -t pyapp:v1'
      }
    }
  }
}
Q38. What are the uses of integrating cloud with DevOps
+
Answer: The centralized nature of cloud computing provides DevOps automation with a standard, centralized platform for testing, deployment, and production. Most cloud providers even offer DevOps technologies like CI tools and deployment tools as a service; CodeBuild, CodePipeline, and CodeDeploy in AWS make for an easy and even faster rate of DevOps practice.
Q39. What is orchestration of containers, and what are the different tools used for orchestration
+
Answer: When deploying to production you cannot use a single machine, as that is not robust for any deployment. When an application is containerized, the stack of applications may run on a single docker host in the development environment to check application functionality, but when we arrive at production servers that is not the case: you should deploy your applications onto multiple nodes, and the stack should be connected between nodes. To ensure network connectivity between the different containers, you would otherwise need shell scripts (or) ansible playbooks spanning the different nodes. Another disadvantage of using those tools is that you cannot run an efficient stack, where an application takes up more resources on one node while another node sits idle most of the time; the deployment strategy therefore also needs to be planned around resources, and load balancing of these applications must be configured as well. To clear all these obstacles there came a concept called orchestration, where your docker containers are orchestrated between the different nodes in the cluster based on the resources available, according to a scheduling strategy, and everything is given as DSL-specific files, not as scripts. The different orchestration tools available in the market are Kubernetes, Swarm, and Apache Mesos.
Q40. What is ansible tower
+
Answer: Ansible is developed by Red Hat, which provides it for IT automation and configuration management purposes. Ansible Tower is the extended management layer created to manage the organization of playbooks using roles, control their execution, and even chain different playbooks together to form workflows. The Ansible Tower dashboard provides a NOC-style UI to look into the status of all ansible playbooks and hosts.
Q41. Applications in which programming languages can be built by Jenkins
+
Answer: Jenkins is a CI/CD tool that does not depend on any programming language for building an application; if there is a build tool to build a given language, that's enough. Even when a plugin for the build tool is not available, you can use any scripting to replace your build stage, like shell, PowerShell, or Python scripts, to build an application in any language.
Q42. Why does almost every tool in DevOps have some DSL (Domain Specific Language)
+
Answer: DevOps is a culture developed to address the needs of agile methodology, where the development rate is faster, so deployment should match its speed, and that needs the operations team to coordinate and work with the dev team. Everything could be automated with plain scripts, but that feels more like an operations-team-only effort and gives a messy organization of pipelines: the more use cases, the more scripts need to be written. So the several use cases that are adequate to cover the needs of agile are taken, tools are created according to them, and customization happens on top of the tool using a DSL, to automate the DevOps practice and infra management.
Q43. Which clouds can be integrated with Jenkins, and what are the use cases
+
Answer: Jenkins can be integrated with different cloud providers for use cases like dynamic Jenkins slaves or deploying to cloud environments. Some of the clouds that can be integrated are:
  • AWS
  • Azure
  • Google Cloud
  • OpenStack
Q44. What are Docker volumes, and what type of volume should be used to achieve persistent storage
+
Answer: Docker volumes are the filesystem mount points created by the user for a container, and a volume can be used by many containers. There are different types of volume mounts available: an empty dir, a host path mount, an AWS-backed EBS volume, an Azure volume, Google Cloud, (or) even NFS and CIFS filesystems. A volume should be mounted on one of the external stores to achieve persistent storage, because the lifetime of files inside a container lasts only as long as the container is present; if the container is deleted, the data is lost.
Q45. Which artifact repositories can be integrated with Jenkins
+
Answer: Any kind of artifact repository can be integrated with Jenkins, using either shell commands (or) dedicated plugins; some of them are Nexus and JFrog.
Q46. What are some of the testing tools that can be integrated with Jenkins, and what are their plugins
+
Answer: Sonar plugin – can be used to integrate code-quality testing of your source code. Performance plugin – can be used to integrate JMeter performance testing. JUnit – to publish unit test reports. Selenium plugin – can be used to integrate with Selenium for automation testing.
Q47. What are the build triggers available in Jenkins
+
Answer: Builds can be run manually, (or) they can be triggered automatically by different sources, such as: Webhooks – API calls from the SCM, made whenever code is committed into the repository (or) for specific events on specific branches. Gerrit code review trigger – Gerrit is an opensource code review tool; whenever a code change is approved after review, a build can be triggered. Trigger build remotely – you can have remote scripts on any machine, (or) even AWS Lambda functions, (or) make a POST request to trigger builds in Jenkins. Scheduled jobs – jobs can also be scheduled like cron jobs. Poll SCM for changes – Jenkins looks for any changes in SCM at a given interval; if there is a change, a build is triggered. Upstream and downstream jobs – a build is triggered by another job that executed previously.
Q48. How to version control Docker images
+
Answer: Docker images can be version controlled using tags, where you can assign a tag to any image using the docker tag <image-id> command. If you push to a docker hub registry without tagging, the default tag latest is assigned; if an image with the latest tag is already present, it demotes that image to untagged and reassigns latest to the newly pushed image.
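For example (the repository name is illustrative):
docker tag 3f1b6e2c myrepo/myapp:1.0    # attach a version tag to an image ID
docker push myrepo/myapp:1.0            # the registry now keeps 1.0 alongside latest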
Q49. What is the use of the Timestamper plugin in Jenkins
+
Answer: It adds a timestamp to every line of the console output of the build.
Q50.Why should you not execute a build on master
+
Answer: You can run a build on the master in Jenkins, but it is not advisable, because the master already has the responsibility of scheduling builds and collecting build outputs into the JENKINS_HOME directory. If we run a build on the Jenkins master, then it additionally needs build tools and a workspace for source code, which puts a performance overload on the system, and if the Jenkins master crashes, it increases the downtime of your build and release cycle.
Q51. Why devops
+
Answer: DevOps is the market trend now, and it follows a systematic approach to getting an application live to market. DevOps is all about the tools which help in building the development platform as well as the production platform. Product companies are now looking at a code-as-a-service concept, in which development skill is used to create a production architecture with almost no downtime.
Q52. Why Ansible
+
Answer: It is a configuration management tool which is agentless. It works with key-based or password-based ssh authentication. Since it is agentless, we have complete control over the manipulated data. Ansible is also used for architecture provisioning, as it has modules which can talk to the major cloud platforms. I have mainly used it for AWS provisioning and application/system config manipulations.
Q53. Why do you think a version control system is necessary for a DevOps team
+
Answer: An application is all about code; if the UI is not behaving as expected, there could be a bug in the code. In order to track code updates, versioning is a must. If by any chance a bug breaks the application, we should be able to revert to the working codebase; versioning helps to achieve this. Also, by keeping track of code commits by individuals, it is very easy to find the source of a bug in the code.
Q54. What role would you prefer to be in the DevOps team
+
Answer: Basically, the following roles are prominent in DevOps, depending upon the skillset: architect, version control personnel, configuration control team, build and integration management, deployment team, testing people, QA, and architecture monitoring team.
Q55. Which of these roles would you take
+
Answer: In my opinion, everyone should aspire to be an architect; within this list, I would fit the roles from 2 to 5. Everyone should understand the working of each role; DevOps is a collective effort rather than an individual effort.
Q56. Suppose you are put into a project where you have to implement a DevOps culture; what will be your approach
+
Answer: Before thinking of DevOps, there should be a clear-cut idea of what needs to be implemented, and it should be defined by the senior architect. If we take the simple example of a shopping market: the output of this business is a website which displays online shopping items and a payment platform for easy payment. Even though it looks simple, the background work is not that easy, because a shopping cart must be: 99.99% live; easy and fast processing of shopping items; an easy and fast payment system; quick reporting to the shopkeeper; quick inventory management; fast customer interaction; and many more. DevOps has to be implemented in each process and phase. Next come the tools used to bring the latest items to the website within a minimal time span: Git, Jenkins, Ansible/Chef, and AWS are the familiar tools which help in continuous delivery to market.
Q57. Is continuous deployment practically possible
+
Answer: Of course it is possible, if we bring agility into every phase of development and deployment. The release, testing, and deployment automation should be that accurately fine-tuned.
Q58. What is agility in devops, basically
+
Answer: Agile is an iterative process which finalizes the application by fulfilling a checklist. For any process there should be a set of checklists in order to standardize the code as well as the build and deployment process. The list depends on the architecture of the application and the business model.
Q59. Why is scripting with Bash, Python, or another language a must for a DevOps team
+
Answer: Even though we have numerous tools in devops, there will be certain custom requirements in a project. In such cases we have to make use of scripting and then integrate it with the tools.
Q60. In AWS, how do you implement high availability of websites
+
Answer: The main concept of high availability is that the website should be live all the time, so we should avoid a single point of failure; in order to achieve this, a load balancer can be used. In AWS, we can implement HA with a load balancer combined with Auto Scaling.
Q61.How to debug inside a docker container
+
Answer: The "docker exec" feature allows users to run a command inside a running container, e.g. docker exec -it <container-id> bash to get a shell for debugging.
Q62.What do you mean by Docker Engine
+
Answer: It is an open-source container build and management tool.
Q63.Why do we need Docker
+
Answer: Applications started being built and deployed iteratively using agile methodology. Docker helps in deploying the same binaries with their dependencies across different environments in a fraction of seconds.
Q64.What do you mean by Docker daemon
+
Answer: The Docker daemon receives and processes incoming API requests from the CLI.
Q65.What do you mean by Docker client
+
Answer: A command line tool – the docker binary – which communicates with the Docker daemon through the Docker API.
Q66.What do you mean by Docker Hub Registry
+
Answer: It is a public image registry maintained by Docker itself; the Docker daemon talks to it through the registry API.
Q67.How do you install docker on a Debian Linux OS
+
Answer: sudo apt-get install docker.io
Q68.What access does the docker group have
+
Answer: Users in the docker group have root-like access, so we should restrict membership as we would protect root.
Q69.How to list the packages installed in an Ubuntu container
+
Answer: dpkg -l lists the packages installed in an Ubuntu container.
Q70.How can we check the status of the latest running container
+
Answer: The "docker ps -l" command lists the latest created container and its status.
Q71.How to stop a container
+
Answer: The "docker kill" command kills a container immediately; the "docker stop" command stops a container gracefully.
Q72.How to list the stopped containers
+
Answer: docker ps -a (-a means all, including stopped containers)
Q73.What do you mean by docker image
+
Answer: An image is a collection of files and their metadata; basically those files are the root filesystem of the container. An image is made up of layers, and each layer can be built upon.
Q74.What are the differences between containers and images
+
Answer: An image is a read-only filesystem, whereas a container is a running form of an image. An image is non-editable; in a container we can edit as we wish and save that again to a new image.
Q75.How to make changes in a docker image
+
Answer: We can't make changes in an image directly; we can make changes in a Dockerfile or to an existing container to create a new layered image.
Q76.What are the different ways to create new images
+
Answer: docker commit: to create an image from a container. docker build: to create an image using a Dockerfile.
Q77.Where do you store and manage images
+
Answer: Images can be stored on your local docker host or in a registry.
Q78.How do we download images
+
Answer: Using the "docker pull" command we can download a docker image.
Q79. What are image tags
+
Answer: Image tags are variants of a Docker image; "latest" is the default tag of an image.
Q80.What is a Dockerfile
+
Answer: A Dockerfile is a series of instructions to build a docker image; the docker build command is used to build from it.
Q81.How to build a docker file
+
Answer: docker build -t <image_name> .
Q82.How to view the history of a docker image
+
Answer: The docker history command lists all the layers in an image, with the image creation date, size, and command used.
Q83.What are CMD and ENTRYPOINT
+
Answer: These allow specifying the default command to be executed when a container starts.
Q84.What is the EXPOSE instruction used for
+
Answer: The EXPOSE instruction declares the ports on which a docker container listens; they are published at run time with docker run -p.
Q85.What is Ansible
+
Answer: A configuration management tool similar to Puppet, Chef, etc.
Q86.Why choose Ansible
+
Answer: Ansible is simple and light; it needs only ssh and Python as dependencies. It doesn't require an agent to be installed.
Q87.What are the ansible modules
+
Answer: Ansible "modules" are small pre-defined pieces of code that perform some action, e.g. copy a file or start a service.
Q88.What are Ansible tasks
+
Answer: Tasks are nothing but ansible modules invoked with their arguments.
Q89.What are handlers in ansible
+
Answer: Handlers are triggered when there is a change of state, e.g. restart a service when a property file has changed.
Q90.What are roles in ansible
+
Answer: Roles are re-usable tasks or handlers.
Q91.What is YAML
+
Answer: YAML (yet another markup language) is a way of storing data in a structured text format, like JSON.
Q92.What are playbooks
+
Answer: Playbooks are the recipes of ansible.
Q93.What is MAVEN
+
Answer: Maven is a Java build tool, so you must have Java installed to proceed.
Q94.What do you mean by validate in maven
+
Answer: Validate checks whether the information provided is correct and everything necessary is available.
Q95.What do you mean by compile in maven
+
Answer: It compiles the source code of the project.
Q96.What do you mean by test in maven
+
Answer: It tests the compiled source code using a suitable testing framework.
Q97.What do you mean by package in maven
+
Answer: It does the binary packaging of the compiled code.
Q98.What is docker-compose
+
Answer: Compose is used to define and run a multi-container application.
Q99.What is continuous integration
+
Answer: CI is nothing but giving immediate feedback to the developer by testing and analyzing the code.
Q100. What is continuous delivery
+
Answer: Continuous delivery is a continuation of CI which aims at delivering the software automatically up to the pre-prod stage.
Q101.What is continuous deployment
+
Answer: Continuous deployment is the next step after CI and CD, where the tested software is delivered to the end customers after some validation and change management activities.
Q102.What is git
+
Answer: git is a source code version management system.
Q103.What is git commit
+
Answer: git commit records changes done to files in the local repository.
Q104.What is git push
+
Answer: git push updates the remote repository with changes committed in the local one.
Q105.What is git fetch
+
Answer: git fetch will pull only the data from the remote repo but doesn't merge it with the repo on your local system.
Q106.What is git pull
+
Answer: git pull will download the files from the remote repo and merge them with the files on your local system.
Q107.How to reset the last git commit
+
Answer: The "git reset" command can be used to undo the last commit.
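For example, both forms undo the last commit; the difference is what happens to your files:
git reset --soft HEAD~1   # undo the commit but keep its changes staged
git reset --hard HEAD~1   # undo the commit and discard its changes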
Q108.What is the need for DevOps
+
Answer: Start the answer by explaining the general market trend: how releasing small features frequently benefits a business compared to releasing big features rarely. Discuss topics such as:
  • Increased deployment frequency
  • Lower failure rate of newer releases
  • Reduced time for bug fixes
  • Faster time to recovery
Q109. Write the key components of DevOps
+
Answer: These are the key components of DevOps:
  • Continuous Integration
  • Continuous Testing
  • Continuous Delivery
  • Continuous Monitoring
Q110. What are the various tools used in DevOps
+
Answer: DevOps contains various stages, and each stage can be achieved with various tools. Below are tools popularly used in DevOps:
  • Version Control: Git, SVN
  • CI/CD: Jenkins
  • Configuration Management Tools: Chef, Puppet, Ansible
  • Containerization Tool: Docker
Also mention any other tools that you worked on that helped you automate the existing environment.
Q111. What is Version Control
+
Answer: A Version Control System records the changes that are made to files or documents over a period of time.
Q112. What are the types of Version Control Systems
+
Answer: There are two types of Version Control Systems:
  • Central Version Control System, e.g. SVN
  • Distributed/Decentralized Version Control System, e.g. Git, Bitbucket
Q113. What is Jenkins, and in which programming language is it written
+
Answer: It is an open-source automation tool for the purpose of Continuous Integration and Continuous Delivery. Jenkins is written in the Java programming language.
Q114. Give an explanation of DevOps
+
Answer: DevOps is nothing but a practice that emphasizes the collaboration and communication of both software developers and the operations team. It focuses on delivering the software product faster and lowering the failure rate of releases.
Q115. What are the key principles or aspects behind DevOps
+
Answer: The key principles or aspects are:
  • Infrastructure as code
  • Continuous deployment
  • Automation
  • Monitoring
  • Security
Q116. Describe the core operations of DevOps with infrastructure and with application
+
Answer: The core operations of DevOps are:
Infrastructure:
  • Provisioning
  • Configuration
  • Orchestration
  • Deployment
Application development:
  • Code building
  • Code coverage
  • Unit testing
  • Packaging
  • Deployment
Q117. How is "infrastructure code" processed or executed in AWS
+
Answer: In AWS, infrastructure code is written in simple JSON format. That JSON code is organized into files called templates. The templates can be deployed on AWS and then managed as stacks. The creating, deleting, and updating operations on the stack are then done by CloudFormation.
Q118. Which scripting language is most important for a DevOps engineer
+
Answer: It is very important to choose a simple language for a DevOps engineer; Python is the most suitable language for DevOps.
Q119. How does DevOps help developers
+
Answer: Developers can fix bugs and implement new features in less time with the help of DevOps. DevOps also helps to build a good communication system within a team, reaching every team member.
Q120. Which are popular tools for DevOps
+
Answer: Popular tools for DevOps are:
  • Jenkins
  • Nagios
  • Monit
  • ELK (Elasticsearch, Logstash, Kibana)
  • Docker
  • Ansible
  • Git
Q121. What is the usefulness of SSH
+
Answer: SSH is used to log into a remote machine and work on the command line, and also to tunnel into the system to enable secure encrypted communications between two untrusted hosts over an insecure network.
Q122. How would you handle revision (version) control
+
Answer: I would post the code on SourceForge or GitHub to give visibility to everyone. I would also post the checklist from the last revision to make sure that any unsolved issues are resolved.
    Q123. How many types of Http re Quests are
    +
    Answer: The types of Http re Quests are GET
  • HEAD
  • PUT
  • POST
  • PATCH
  • DELETE
  • TRACE
  • CONNECT
  • OPTIONS
  • Q124.If a Linux-build-server suddenly starts getting slowwhat will you check
+
Answer: If a Linux build server suddenly starts getting slow, I will check the following three things. Application-level troubleshooting: issues related to RAM, disk I/O reads and writes, disk space, etc. System-level troubleshooting: check the application log file or application server log file for system performance issues; check the web server logs (HTTP, Tomcat) or the JBoss and WebLogic logs to see whether the application server response/receive time is the cause of the slowness; check for memory leaks in any application. Dependent-services troubleshooting: issues related to antivirus, firewall, network, SMTP server response time, etc.
Q125. Describe the key components of DevOps
+
The most important DevOps components are:
  • Continuous Integration
  • Continuous Testing
  • Continuous Delivery
  • Continuous Monitoring
Q126. Give examples of some popular cloud platforms used for DevOps implementation
+
Answer: For DevOps implementation, popular cloud platforms are:
  • Google Cloud
  • Amazon Web Services
  • Microsoft Azure
Q127. Describe the benefits of using a Version Control system
+
Answer: A Version Control system gives team members the freedom to work on any file at any time. All the previous versions and variants are closely packed up inside the VCS. With a distributed VCS, the complete project history is stored on every contributor's machine, so if the central server breaks down you can use a team member's local copy of the project. You can also see the actual changes made to a file's content.
    Q128. How does Git bisect help
    +
    Answer: Git bisect helps you find the commit which introduced a bug, using binary search.
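    A typical session, assuming v1.0 is a known good tag, looks like this:
        git bisect start
        git bisect bad            # the current HEAD is broken
        git bisect good v1.0      # the last known good commit or tag
        # Git now checks out a midpoint commit; test it, then mark it:
        git bisect good           # or: git bisect bad
        # repeat until Git reports the first bad commit, then clean up:
        git bisect reset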
    Q129. What is a build
    +
    Answer: A build is the process of putting source code together to check whether it works as a single unit. During the build process, the source code undergoes compilation, inspection, testing, and deployment.
    Q130. What is Puppet
    +
    Answer: Puppet is a configuration management tool which helps you automate administration tasks.
    Q131. What is two-factor authentication
    +
    Answer: Two-factor authentication is a security method in which the user provides two ways of identification from separate categories.
    Q132. What is ‘Canary Release’
    +
    Answer: It is a pattern that lowers the risk of introducing a new software version into the production environment. The new version is released to a subset of users in a controlled manner before making it available to the complete user set.
    Q133. What are the important types of testing required to ensure a new service is ready for production
    +
    Answer: You need to run continuous testing to make sure the new service is ready for production.
    Q134. What is Vagrant
    +
    Answer: Vagrant is a tool used to create and manage virtual computing environments for testing and software development. Q135. What is the usefulness of PTR in DNS. Answer: PTR, or Pointer record, is used for reverse DNS lookup.
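    For example, a reverse lookup with dig (the IP address here is just an example):
        dig -x 8.8.8.8 +short
    This returns the hostname that the PTR record points to (at the time of writing, dns.google.).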
    Q136. What is Chef
    +
    Answer: Chef is a powerful automation platform used for transforming infrastructure into code. With this tool, you can write scripts that are used to automate processes. Q137. What are the prerequisites for the implementation of DevOps. Answer: The following are useful prerequisites for DevOps implementation: At least one Version Control Software (VCS).
  • Established communication between the team members
  • Automated testing
  • Automated deployment
  • Q138. Which best practices are essential for DevOps success
    +
    Answer: Here are the essential best practices for DevOps implementation: Measure the speed of delivery, meaning the time taken for any task to get into the production environment.
  • Track the defects found in the various builds.
  • It's important to calculate the actual or average time taken to recover in case of a failure in the production environment.
  • Get feedback from customers about bug reports, because this also affects the quality of the application.
  • Q139. How does the SubGit tool help
    +
    Answer: SubGit helps you migrate from SVN to Git. Using SubGit, you can build a writable Git mirror of a local or remote Subversion repository. Q140. Name some of the prominent network monitoring tools. Answer: Some of the most prominent network monitoring tools are: Splunk
  • Icinga2
  • Wireshark
  • Nagios
  • OpenNMS
  • Q141. How do you know if your video card can run Unity? Answer: When you use the following command
    +
    /usr/lib/nux/unity_support_test -p — it will give detailed output about Unity's requirements, and if they are met, then your video card can run Unity.
    Q142. How to enable startup sound in Ubuntu
    +
    Answer: To enable the startup sound: Click the control gear and then click on Startup Applications. In the Startup Application Preferences window, click Add to add an entry. Then fill in the information in the boxes (Name, Command, and Comment). For the command, use: /usr/bin/canberra-gtk-play --id="desktop-login" --description="play login sound". Log out and then log back in once you are done. You can also use the shortcut key Ctrl+Alt+T to open a terminal.
    Q143. Which is the fastest way to open an Ubuntu terminal ina particular directory
    +
    Answer: To open an Ubuntu terminal in a particular directory, you can use a custom keyboard shortcut. To do that, in the command field of a new custom shortcut, type gnome-terminal --working-directory=/path/to/dir.
    Q144. How could you get the current colour of the currentscreen on the Ubuntu desktop
    +
    Answer: You have to open the background image in The GIMP (image editor) and use the dropper tool to select the colour at a selected point. It gives you the RGB value of the colour at that point.
    Q145. How can you create launchers on a desktop inUbuntu
    +
    Answer: You have to press ALT+F2, then type "gnome-desktop-item-edit --create-new ~/Desktop"; it will launch the old GUI dialog and create a launcher on your desktop in Ubuntu.
    Q146. Explain what Memcached is
    +
    Answer: Memcached is a free, open-source, high-performance, distributed memory object caching system. The primary objective of Memcached is to improve the response time for data that would otherwise be recovered or constructed from some other source or database. Memcached is used to avoid repeatedly hitting a SQL database or another source to collect data for concurrent requests. Memcached can be used for: Social networking -> Profile caching
  • Content aggregation -> HTML/page caching
  • Ad targeting -> Cookie/profile tracking
  • Relationships -> Session caching
  • E-commerce -> Session and HTML caching
  • Location-based services -> Database query scaling
  • Gaming and entertainment -> Session caching
  • Memcached helps make application processes much faster
  • Memcached streamlines the object selection and eviction process
  • It reduces the number of retrieval requests to the database
  • It cuts down the I/O (input/output) access to the hard disk
  • Drawbacks of Memcached are: It is not a persistent data store
  • It is not a database
  • It is not application-specific
  • It is unable to cache large objects
  • Q147. Mention some important features of Memcached
    +
    Answer: Important features of Memcached include: CAS Tokens: A CAS token is attached to an object retrieved from the cache. You can use that token to save your updated object.
  • Callbacks: They simplify the code
  • getDelayed: It reduces the time your script spends waiting for results to come back from a server
  • Binary protocol: You can use the binary protocol instead of ASCII with the newer client
  • Igbinary: Previously, a client always had to serialize values containing complex data; now, with Memcached, you can use the igbinary option.
  • Q148. Is it possible to share a single instance of Memcached between multiple projects
    +
    Answer: Yes, it is possible to share a single instance of Memcached between multiple projects. You can run Memcached on more than one server because it is a memory store. You can also configure your client to speak to a particular set of instances. So, you can run two different Memcached processes on the same host independently.
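    As a sketch, two independent instances could be started on the same host on different ports (the flags are standard memcached options; the ports and memory size are examples only):
        memcached -d -p 11211 -m 64    # first instance
        memcached -d -p 11212 -m 64    # second, independent instance
        printf 'stats\nquit\n' | nc localhost 11211    # inspect the first instance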
    Q149. You have multiple Memcached servers, and one of them fails while holding your data. Can you recover the key data from that particular failed server
    +
    Answer: The data won't be removed from the failed server, but there is a solution for automatic fail-over, which you can configure for multiple nodes. Fail-over can be triggered during any socket or Memcached server-level error, but not during standard client errors like adding an existing key, etc.
    Q150. How can you minimize Memcached server outages
    +
    Answer: If you write the code to minimize cache stampedes, then an outage will have minimal impact.
  • Another way is to bring up an instance of Memcached on a new machine using the lost machine's IP address.
  • Changing the code is another option to minimize server outages, as it gives you the liberty to change the Memcached server list with minimal work.
  • Setting a timeout value is another option that some Memcached clients implement for server outages. When your Memcached server goes down, the client will keep trying to send a request until the timeout limit is reached.
  • Q151. How can you update Memcached when data changes
    +
    Answer: When data changes, you can update Memcached by: Clearing the cache proactively: clear the cache when an insert or update is made. Resetting the cache: this method is similar to the previous one, but instead of deleting the keys and waiting for the next request for the data to refresh the cache, reset the values right after the insert or update.
    Q152. What is the Dogpile effect, and how can it be prevented
    +
    Answer: The Dogpile effect occurs when a cache expires and a website is hit by multiple client requests at the same time. You can use a semaphore lock to prevent this effect. In this system, after the value expires, the first process acquires the lock and starts generating the new value.
    Q153. How should Memcached not be used
    +
    Answer: Use Memcached as a cache; don't use it as a data store.
  • Don't use Memcached as the ultimate source of information to run your application. You must always have another data source at hand.
  • Memcached is basically a key-value store and can't perform a query over the data or iterate over the contents to extract information.
  • Memcached offers no security, either in encryption or authentication.
  • Q154. When a server shuts down, is the data stored in Memcached still available
    +
    Answer: No. After a server shuts down and restarts, the data stored in Memcached is deleted, because Memcached does not persist data.
    Q155. What are the differences between Memcache and Memcached
    +
    Answer: Memcache: It is an extension that allows you to work through handy object-oriented (OOP) and procedural interfaces. It is designed to reduce database load in dynamic web applications.
  • Memcached: It is an extension that uses the libmemcached library to provide an API for communicating with Memcached servers. It is used to speed up dynamic web applications by reducing database load. It is the more recent API.
    Q156. Explain the Blue/Green Deployment Pattern. Answer: Blue/green deployment addresses one of the hardest challenges of an automated deployment process. In the blue/green deployment approach, you maintain two identical production environments. Only one of them is LIVE at any given point in time, and it is called the Blue environment. When the team has fully prepared the software release, it conducts the final testing in the other environment, called the Green environment. Once verification is complete, the traffic is routed to the Green environment.
    Q157. What are containers
    +
    Answer: Containers are a form of lightweight virtualization and provide isolation among processes.
    Q158. What is a post-mortem meeting with reference to DevOps
    +
    Answer: In DevOps, a post-mortem meeting takes place to discuss the mistakes made and how to repair them during the overall process.
    Q159. What is the easiest method to build a small cloud
    +
    Answer: VMfres is one of the best options to build an IaaS cloud from VirtualBox VMs in less time. But if you want lightweight PaaS, then Dokku is a better option, because a bash script can build PaaS out of Dokku containers. Q160. Name two tools you can use for Docker networking. Answer: You can use Kubernetes and Docker Swarm for Docker networking. Q161. Name some DevOps implementation areas. Answer: DevOps is used for production, production feedback, IT operations, and software development.
    Q162. What is CBD
    +
    Answer: CBD, or Component-Based Development, is a unique way to approach product development. In this method, developers don't develop a product from scratch; they look for existing, well-defined, tested, and verified components to compose and assemble into a product. Q163. Explain Pair Programming with reference to DevOps. Answer: Pair programming is an engineering practice from the Extreme Programming rules. It is the process where two programmers work on the same system, on the same design/algorithm/code. They play two different roles: one as the "driver" and the other as the "observer". The observer continuously watches the progress of the work to identify problems. They can swap roles at any step of the program.
    Q1). Describe what DevOps is
    +
    DevOps is the new buzz in the IT world, swiftly spreading all through the technical space. Like other new and popular technologies, people have contradictory impressions of what DevOps exactly is. The main objective of DevOps is to alter and improve the relationship between the development and IT teams by advocating better inter-communication and smoother collaboration between the two units of an enterprise.
    Q2). What is the programming language used in DevOps
    +
    Python is used in DevOps.
    Q3). What is the necessity of DevOps
    +
    Corporations now face the necessity of delivering quicker and improved releases to meet the ever more persistent demands of mindful users and to decrease the "time to market." DevOps helps deployments happen very rapidly.
    Q4). Which are the areas where DevOps is implemented
    +
    With the passage of time, the need for DevOps has been continuously increasing. These are the main areas it is implemented in: production development, production feedback, and development of IT operations.
    Q5). What is agile development and Scrum
    +
    Agile development is used as an alternative to the Waterfall development practice. In Agile, the development process is more iterative and incremental; there is more testing and feedback at every stage of development, as opposed to only the last stage in Waterfall. Scrum is used to manage complex software and product development, using iterative and incremental practices. Scrum has three roles: product owner, Scrum master, and team.
    Q6). Name a few most famous DevOps tools
    +
    The most prevalent DevOps tools are stated below: Puppet, Chef, Ansible, Git, Nagios, Docker, Jenkins
    Q7). Can we consider DevOps as an agile practice
    +
    Yes, DevOps can be considered an agile practice, where development is driven by the rapidly changing demands of professionals, sticking closer to corporate needs and requirements.
    Q8). What is a DevOps engineer's responsibility concerning Agile development
    +
    DevOps specialists work very methodically with Agile development teams to ensure they have the environment essential to support functions such as automated testing, continuous integration, and continuous delivery. DevOps specialists must be in continuous contact with the developers and make all required parts of the environment work flawlessly.
    Q9). Why is continuous testing significant for DevOps
    +
    You can respond to this question by saying, "Continuous testing permits any change made in the code to be tested immediately. This circumvents the problems created by having 'big-bang' testing left to the end of the cycle, such as release postponements and quality issues. In this way, continuous testing enables more frequent, good-quality releases."
    Q10). What do you think is the role of SSH
    +
    SSH is Secure Shell, which gives users a very secure, encrypted mechanism to safely log into systems and safely transfer files. It aids in logging into a remote machine and working on its command line. It secures encrypted end-to-end communication between two hosts communicating over an insecure network.
    Q11). How will you differentiate DevOps from Agile
    +
    Agile is a methodology that is all about software development, whereas DevOps extends to software deployment and management.
    Q12). What are the benefits of DevOps when seen from the technical and business viewpoints
    +
    The technical benefits of DevOps can be given as: Software delivery is continuous. Complexity of problems is reduced. Quicker approach to resolving problems. Reduced workforce requirements. The business benefits: Faster delivery of features. More stable operating environments. More time available to add value. Quicker time to market for features.
    Q13). Why do you think DevOps is developer friendly
    +
    DevOps is developer friendly because it allows bugs to be fixed and new features to be implemented quickly and smoothly. It also provides the much-needed clarity of communication among team members.
    Q14). What measures would you take to handle revision (version) control
    +
    To manage successful revision control, you are required to post your code on SourceForge or GitHub so that everyone on the team can view it, and viewers can give suggestions for its improvement. Q15). List a few types of HTTP requests. A few types of HTTP requests are: GET, HEAD, PUT, POST, PATCH, DELETE, TRACE, CONNECT, OPTIONS. Q16). Explain the DevOps toolchain. Here is the DevOps toolchain: Code, Build, Test, Package, Release, Configure, Monitor. Q17). Elucidate the core operations of DevOps concerning development and infrastructure. Here is a list of the core operations of DevOps: Unit testing, Packaging, Code coverage, Code developing, Configuration, Orchestration, Provisioning, Deployment.
    Q18). Why do you think there is a need for continuous integration of development and testing
    +
    Continuous integration of development and testing enhances the quality of software and greatly reduces the time taken to deliver it, by replacing the old-school practice of testing only after completing the entire development process. Q19). Name a few branching strategies used in DevOps. A few branching strategies used are: feature branching, task branching, and release branching.
    Q20). What is the purpose of Git tools in DevOps
    +
    The primary objective of Git is to efficiently manage a project, or a given set of files, as they change over time. Git stores this information in a data structure called a Git repository.
    Q21). Explain what the major components of DevOps are
    +
    The major components of DevOps are continuous integration, continuous delivery, continuous testing, and continuous monitoring.
    Q22). What steps should be taken when a Linux-based server suddenly gets slow
    +
    When a Linux-based server suddenly becomes slow, you should focus primarily on three things: application-level troubleshooting, system-level troubleshooting, and dependent-services troubleshooting.
    Q23). Which cloud platforms can be used for successful DevOps implementation
    +
    Cloud platforms that can be used for successful DevOps implementation are: Google Cloud, Amazon Web Services, Microsoft Azure
    Q24). What is a Version Control System (VCS)
    +
    A VCS is a software application that helps software developers work together and maintain the complete history of their work.
    Q25). What are the significant benefits of a VCS (Version Control System)
    +
    The significant benefits of using a VCS can be given as: It allows team members to work simultaneously. All past variants and versions are packed within the VCS. A distributed VCS helps you store the complete history of the project, so in case of a breakdown of the central server, you may use a local Git repository.
    It allows you to see what exact changes were made to the content of a file. Q26). What is Git Bisect
    +
    Git bisect helps you find the commit which introduced a bug, using binary search. Here is the basic syntax: git bisect <subcommand> <options>
    Q27). What do you understand by the term build
    +
    A build is a method in which the source code is put together to check whether it works as a single unit. In the complete process, the source code undergoes compilation, testing, inspection, and deployment.
    Q28). As per your experience, what is the most important thing that DevOps helps to achieve
    +
    The most important thing that DevOps helps us achieve is to get changes into production quickly while minimizing risks related to software quality and compliance. Beyond this, there are more benefits of DevOps, including better communication and collaboration among team members. Q29). Discuss one use case where DevOps can be implemented in real life. Etsy is a company that focuses on vintage, handmade, and uniquely manufactured items. There are millions of Etsy users selling products online. At one stage, Etsy decided to follow a more agile approach; DevOps helped Etsy with a continuous delivery pipeline and a fully automated deployment lifecycle. Q30). Explain your understanding of both the software development side and the technical operations side of an organization you have worked in recently. The answer to this question may vary from person to person; here, you should discuss how flexible you were in your last company. DevOps Interview Questions and Answers for the advanced workforce: In this section, we will be discussing interview questions for experienced people having more than three years of experience.
    Q31). What are the anti-patterns in DevOps
    +
    An anti-pattern arises when a pattern that works for others is blindly followed by your organization even though it does not fit. If you continue to follow such a pattern blindly, you are essentially adopting an anti-pattern.
    Q32). What is a Git Repository
    +
    It is the data structure used by the Git version control system; it tracks changes to files and allows you to revert to any particular change.
    Q33). In Git, how to revert a commit that has already beenmade public
    +
    Remove or fix the bad change in a new commit and push it to the remote repository. This is the most natural way to fix an error. Once you have made the necessary changes, commit them with: git commit -m "commit message". Alternatively, create a new commit that undoes all the changes made in the bad commit: git revert <commit-hash>
    Q34). What is the process to squash last N number of commitsinto a single commit
    +
    There are two options to squash the last N commits into a single commit. To write a new commit message from scratch, use the following command: git reset --soft HEAD~N && git commit. To reuse the existing messages, extract them and pass them to git commit (HEAD@{1} refers to where HEAD pointed before the reset): git reset --soft HEAD~N && git commit --edit -m "$(git log --format=%B --reverse HEAD..HEAD@{1})"
    Q35). What is Git rebase, and how can it be used to resolve conflicts in a feature branch before merging
    +
    Git rebase is a command used to integrate another branch into the branch where you are currently working. It moves all local commits to the top of the history of that branch. It effectively replays the changes of the feature branch at the tip of master, allowing conflicts to be resolved in the process. Afterwards, the feature branch can be merged into the master branch with relative ease, sometimes as a fast-forward operation.
    Q36). How can you configure a Git repository to run code sanity-checking tools right before making commits, and prevent them if the test fails
    +
    A sanity or smoke test determines whether it is reasonable to continue testing. Configuring a Git repository to run sanity checks before commits, and to reject the commit if the check fails, is easy. It can be done with a simple pre-commit hook script like the one below (save it as .git/hooks/pre-commit and make it executable):
        #!/bin/sh
        files=$(git diff --cached --name-only --diff-filter=ACM | grep '\.go$')
        if [ -z "$files" ]; then exit 0; fi
        unfmtd=$(gofmt -l $files)
        if [ -z "$unfmtd" ]; then exit 0; fi
        echo "some .go files are not gofmt'd"
        exit 1
    Q37). How to find a list of files that were changed in a particular commit
    +
    To get a list of files that were changed in a particular commit, you can use the following command: git diff-tree -r {commit hash}
    Q38). How to set up a script to run every time a repository receives new commits from a push
    +
    There are three hooks for setting up a script that runs every time a repository receives new commits from a push: the pre-receive hook, the post-receive hook, and the update hook. Q39). Write commands to check in Git whether a branch has been merged into master or not. Here are the commands. To list branches that are merged into the current branch, use: git branch --merged. To list branches that are not merged into the current branch, use: git branch --no-merged
    Q40). What is continuous integration in DevOps
    +
    It is a development practice that requires developers to integrate code into a shared repository multiple times a day. Each check-in is verified with an automated build, allowing teams to detect problems early.
    Q41). Why is continuous integration necessary for the development and testing team
    +
    It improves the quality of software and reduces the overall time to delivery once the development is complete. It allows the development team to find and locate bugs at an early stage and to merge their code into the shared repository multiple times a day for automated testing.
    Q42). Are there any particular factors included in continuous integration
    +
    You should include the following points in your answer: Automate the build and maintain a code repository. Make the build self-testing and fast. Testing should be done in a clone of the production environment. It should be easy to get the latest deliverables.
    Automate the deployment, and everyone should be able to see the result of the latest build. Q43). What is the process to copy Jenkins from one server to another
    +
    There are multiple ways to copy Jenkins from one server to another. Let us discuss them below: You can move a job from one Jenkins installation to another by simply copying the corresponding job directory. Make a copy of an existing job and save it under a different name in the job directory.
    Rename an existing job and make the necessary changes as per the requirements. Q44). How to create a file and take backups in Jenkins
    +
    To take a backup in Jenkins, you just need to copy the job directory and save it with a different name. Q45). Explain the process to set up jobs in Jenkins. Go to the Jenkins top page, select the "New Job" option, and choose "Build a free-style software project." Select the optional SCM where your source code resides. Select the optional triggers to control when Jenkins performs builds. Choose the preferred script to be used for making the build. Collect the information from the build and notify people about the build results. Q46). Name a few useful plugins in Jenkins. Some popular plugins in Jenkins can be given as:
    Maven 2 project
    +
    Amazon EC2, HTML Publisher, Copy Artifact, Join, Green Balls
    Q47). How will you secure Jenkins
    +
    Here are a few steps you should follow to secure Jenkins: Make sure the global security option is on and that Jenkins is integrated with the company's user directory with appropriate login details. Make sure the project matrix is enabled for fine-tuned access. Automate the process of setting privileges in Jenkins with custom version-controlled scripts. Limit physical access to Jenkins data/folders. Run security audits periodically. Jenkins is one of the tools used extensively in DevOps, and hands-on training in Jenkins can make you an expert in the DevOps domain.
    Q48). What is continuous testing in DevOps
    +
    It is the process of executing automated tests as part of software delivery to receive immediate feedback on the latest build. In this way, each build can be tested continuously, allowing the development team to get faster feedback and prevent potential problems from progressing to the next stage of the delivery cycle.
    Q49). What is automation testing in DevOps
    +
    It is the process of automating the manual testing of an application under test (AUT). It involves the use of testing tools that let you create test scripts that can be executed repeatedly without any manual intervention.
    Q50). Why is automation testing significant in DevOps
    +
    Automation testing is significant in DevOps for the following reasons: It supports the execution of repeated test cases. It helps in testing a large test matrix quickly. It enables unattended test execution. It encourages parallel execution. It improves accuracy by eliminating human errors. It helps in saving overall time and investment.
    Q51). What is the importance of continuous testing in DevOps
    +
    With continuous testing, all changes to the code can be tested automatically. It avoids the problems created by the big-bang approach at the end of the cycle, like release delays or quality issues. In this way, continuous testing ensures frequent, quality releases.
    Q52). What are the major benefits of continuous testing tools
    +
    The major benefits of continuous testing tools can be given below: Policy analysis, Risk assessment, Requirements traceability, Test optimization, Advanced analytics, Service virtualization
    Q53). Which testing tool is the best, as per your experience
    +
    The Selenium testing tool is the best, as per my experience. Here are a few benefits that make it suitable for the workplace: It is an open-source, free testing tool with a large user base and helpful community. It is compatible with multiple browsers and operating systems.
    It supports multiple programming languages, with regular development and distributed testing. Q54). What are the different testing types supported by Selenium
    +
    These are regression testing and functional testing.
    Q55). What is two-factor authentication in DevOps
    +
    Two-factor authentication in DevOps is a security method where the user is provided with two identification methods from different categories.
    Q56). Which type of testing should be performed to make sure that a new service is ready for production
    +
    It is continuous testing that makes sure a new service is ready for production.
    Q57). What is Puppet
    +
    It is a configuration management tool in DevOps that helps you automate administration tasks.
    Q58). What do you understand by the term Canary Release
    +
    It is a pattern that reduces the risk of introducing a new version of the software into the production environment. It is made available in a controlled manner to a subset of users before releasing to the complete set of users.
    Q59). What is the objective of using PTR in DNS
    +
    PTR means Pointer record, which is required for a reverse DNS lookup.
    Q60). What is Vagrant in DevOps
    +
    It is a DevOps tool used for creating and managing virtual environments for testing and developing software. DevOps Job Interview Questions and Answers
    Q61). What are the prerequisites for the successful implementation of DevOps
    +
    Here are the prerequisites for the successful implementation of DevOps: One version control system, Automated testing, Automated deployment, Proper communication among team members
    Q62). What are the best practices to follow for DevOps success
    +
    Here are the essential practices to follow for DevOps success: Measure the speed of delivery, i.e. the time taken for a task to get into the production environment. Focus on the different types of defects found in the build. Check the average time taken to recover in case of failure.
    Track the total number of bugs reported by customers, as these impact the quality of the application. Q63). What is the SubGit tool
    +
    The SubGit tool helps in migrating from SVN to Git. It allows you to build a writable Git mirror of a remote or local Subversion repository. Q64). Name a few network monitoring tools. Splunk, Icinga 2, Wireshark, Nagios, OpenNMS
    Q65). How to check whether your video card can run Unity or not
    +
    Here is the command to check whether your video card can run Unity or not: /usr/lib/nux/unity_support_test -p. It will give you detailed output about Unity's requirements. If they are met, your video card can run Unity.
    Q66). How to enable the start-up sounds in Ubuntu
    +
    To enable the start-up sounds in Ubuntu, you should follow these steps: Click the control gear, then click on Startup Applications. In the "Startup Application Preferences" window, click "Add" to add a new entry. Add the following command in the command box: /usr/bin/canberra-gtk-play --id="desktop-login" --description="play login sound". Now, log out and log back in once you are done.
    Q67). What is the quickest way of opening an Ubuntu terminal in a particular directory
    +
    For this purpose, you can use a custom keyboard shortcut. To do that, in the command field of a new custom shortcut, type gnome-terminal --working-directory=/path/to/dir.
    Q68). How to get the current color of the screen on the Ubuntu desktop
    +
    You should open the background image in an image editor and use a dropper tool to select the color at a specific point. It will give you the RGB value of that color at that point.
    Q69). How to create launchers on a Ubuntu Desktop
    +
    To create a launcher on an Ubuntu desktop, press ALT+F2 and then type "gnome-desktop-item-edit --create-new ~/Desktop"; it will launch the old GUI dialog and create a launcher on your desktop.
    Q70). What is Memcached in DevOps
    +
    It is an open-source, high-speed, distributed memory object caching system. Its primary objective is to enhance the response time for data that can otherwise be constructed or recovered from another source or database. It avoids the need to repeatedly query a SQL database to fetch data for concurrent requests.
    Q71). Why is Memcached useful
    +
    It speeds up the application processes. It determines what to store and share. It reduces the total number of retrieval requests to the database. It cuts the I/O access from the hard disk.
    Q72). What are the drawbacks of Memcached
    +
    It is not a persistent data store. It is not a database. It is not application-specific. It is not able to cache large objects.
    Q73). What are the features of Memcached
    +
    A few highlighted features of Memcached can be given as: CAS tokens, which are used to save updated objects. Callbacks, to simplify the code. getDelayed, to reduce the response or wait time for the outcome. A binary protocol to use with the newer client. The igbinary option, available for use with complex data.
    Q74). Can you share a single instance of Memcached between multiple projects
    +
    Yes, it is possible.
    Q75). If you have multiple Memcached servers and one of them fails, what will happen
    +
    Even if one of the Memcached servers fails, data won't get lost; it can be recovered by configuring for multiple nodes.
    Q76). How to minimize the Memcached server outages
    +
    If one of the server instances fails, it will put a huge load on the database server. To avoid this, the code should be written in such a way that it minimizes cache stampedes and leaves a minimal impact on the database server. You can bring up an instance of Memcached on a new machine using the lost machine's IP address. You can modify the Memcached server list to minimize server outages. Set a timeout value for Memcached server outages: if the server goes down, the client will keep trying to send requests until the timeout value is reached.
    Q77). How to update Memcached when data changes
    +
    To update Memcached when data changes, you can use these two techniques: Clear the cache proactively. Reset the cache.
    Q78). What is a Dogpile effect and how to prevent it
    +
    The Dogpile effect refers to the event when the cache expires and the website is hit by multiple requests at the same time. A semaphore lock can minimize this effect: when the cache expires, the first process acquires the lock and generates the new value as required.
    Q79). Explain when Memcached should not be used
    +
    It should be used only as a cache, not as a data store. It should not be taken as the only source of information to run your apps; the data should be available through other sources too. It is just a key-value store and cannot perform a query or iterate over the contents to extract information. It does not offer any security, for either authentication or encryption.
    Q80). What is the significance of the blue/green colors in the deployment pattern
    +
    These two colors are used to represent tough deployment challenges for a software project. The live environment is the Blue environment. When the team prepares the next release of the software, it conducts the final stage of testing in the Green environment.
    Q81). What is a Container
    +
    Containers are lightweight virtualizations that offer isolation among processes.
    Q82). What is post mortem meeting in DevOps
    +
    A post-mortem meeting discusses what went wrong and what steps should be taken to avoid failures in the future. Q83). Name two tools that can be used for Docker networking. These are Docker Swarm and Kubernetes.
    Q84). How to build a small cloud quickly
    +
    Dokku can be a good option to build a small cloud quickly.
    Q85). Name a few common areas where DevOps is implemented
    +
    These are IT, production, operations, marketing, software development, etc.
    Q86). What is pair programming in DevOps
    +
    It is a development practice from the Extreme Programming rules.
    Q87). What is CBD in DevOps
    +
    CBD, or component-based development, is a unique style of approaching product development.
    Q88). What is Resilience Test in DevOps
    +
    It ensures the full recovery of data in case of failure. Q89). Name a few important DevOps KPIs. The three most important KPIs of DevOps can be given as: mean time to failure recovery, percentage of failed deployments, and deployment frequency.
    Q90). What is the difference between asset and configuration management
    +
    Asset management refers to any system that monitors and maintains the assets of a group or unit. Configuration management is the process of identifying, controlling, and managing configuration items in support of change management.
    Q91). How does HTTP work
    +
    The HTTP protocol works like any other protocol in a client-server architecture. The client initiates a request, and the server responds to it.
    Q92). What is Chef
    +
    It is a powerful automation tool for transforming infrastructure into code.
    Q93). How will you define a resource in Chef
    +
    A resource is a piece of infrastructure and its desired state, such as: packages that should be installed, services that should be running, or files that should be generated.
    Q94). How will you define a recipe in Chef
    +
    A recipe is a collection of resources describing a particular configuration or policy.
    Q95). How is a cookbook different from a recipe in Chef
    +
    The answer is pretty direct. A recipe is a collection of resources, and a cookbook is a collection of recipes and other information.
    Q96). What is an Ansible Module
    +
    Modules are considered units of work in Ansible. Each module is standalone, and it can be written in common scripting languages.
    Q97). What are playbooks in Ansible
    +
    Playbooks are Ansible's orchestration, configuration, and deployment language. They are written in a human-readable basic text format (YAML).
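    A minimal playbook sketch, assuming an inventory group named web and Debian-based hosts (the group, file, and package names are examples only):
        # site.yml
        - hosts: web
          become: yes
          tasks:
            - name: Ensure nginx is installed
              apt:
                name: nginx
                state: present
            - name: Ensure nginx is running
              service:
                name: nginx
                state: started
    It would be run with: ansible-playbook -i inventory site.yml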
    Q98). How can you check the complete list of Ansible variables
    +
    You can use this command to check the complete list of Ansible variables for a host: ansible -m setup hostname
    Q99). What is Nagios
    +
    It is a DevOps tool for continuous monitoring of systems, business processes, application services, etc.
    Q100). What are plugins in DevOps
    +
    Plugins are scripts that are run from the command line to check the status of a host or service.
    Question: What Are the Benefits of DevOps
    +
    DevOps is gaining more popularity day by day. Here are some benefits of implementing DevOps practice. Release velocity: DevOps enables organizations to achieve great release velocity; we can release code to production more often and without hectic problems. Development cycle: DevOps shortens the development cycle from initial design to production. Full automation: DevOps helps achieve full automation from testing to build, release, and deployment. Deployment rollback: In DevOps, we plan for any failure in deployment by being able to roll back in case of a bug in code or an issue in production. This gives confidence in releasing features without worrying about downtime during rollback. Defect detection: With the DevOps approach, we can catch defects much earlier than releasing to production, which improves the quality of the software. Collaboration: With DevOps, collaboration between development and operations professionals increases. Performance-oriented: With DevOps, the organization follows a performance-oriented culture in which teams become more productive and more innovative.
    Question: What is The Typical DevOps workflow
    +
    The typical DevOps workflow is as follows: Atlassian Jira is used for writing requirements and tracking tasks. Based on the Jira tasks, developers check code into the Git version control system. The code checked into Git is built using Apache Maven. The build process is automated with Jenkins. During the build process, automated tests run to validate the code checked in by a developer. The code built on Jenkins is sent to the organization's Artifactory. Jenkins automatically picks the libraries from Artifactory and deploys them to production. During production deployment, Docker images are used to deploy the same code on multiple hosts. Once the code is deployed to production, monitoring tools like Nagios are used to check the health of the production servers. Splunk-based alerts inform the admins of any issues or exceptions in production.
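    A workflow like this is often expressed as a Jenkins pipeline. Below is a minimal declarative Jenkinsfile sketch of the build/test/deploy steps described above; the stage names and the deploy.sh script are illustrative assumptions, not part of the original answer:
        pipeline {
            agent any
            stages {
                stage('Build') {
                    steps { sh 'mvn -B clean package' }   // Maven build, as in the workflow above
                }
                stage('Test') {
                    steps { sh 'mvn test' }               // automated tests during the build
                }
                stage('Deploy') {
                    steps { sh './deploy.sh' }            // hypothetical deployment script
                }
            }
        }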
    Question: DevOps Vs Agile
    +
    Agile is a set of values and principles about how to develop software in a systematic way, whereas DevOps is a way to quickly, easily, and repeatably move that software into production infrastructure, in a safe and simple way. In order to achieve that, we use a set of DevOps tools and techniques.
    Question: What is the Most Important Thing DevOps Helps Us Achieve
    +
    The most important aspect of DevOps is to get changes into production as quickly as possible while minimizing risks in software quality assurance and compliance. This is the primary objective of DevOps. Question: What Are Some DevOps Tools. Here is a list of some of the most important DevOps tools: Git, Jenkins, Bamboo, Selenium, Puppet, BitBucket, Chef, Ansible, Artifactory, Nagios, Docker, Monit, ELK (Elasticsearch, Logstash, Kibana), collectd/collectl
    Question: How To Deploy Software
    +
    Code is deployed by adopting continuous delivery best practices, which means that checked-in code is built automatically and the artifacts are published to repository servers. On the application servers there are deployment triggers, usually timed using cron jobs. All the artifacts are then downloaded and deployed automatically. Gradle DevOps Interview Questions
    Question: What is Gradle
    +
    Gradle is an open-source build automation system that builds upon the concepts of Apache Ant and Apache Maven. Gradle uses a proper programming language instead of an XML configuration file; the language is called Groovy. Gradle uses a directed acyclic graph ("DAG") to determine the order in which tasks can be run. Gradle was designed for multi-project builds, which can grow to be quite large. It supports incremental builds by intelligently determining which parts of the build tree are up to date; any task dependent only on those parts does not need to be re-executed.
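    A minimal build.gradle sketch for a plain Java project (the JUnit version shown is only an example):
        plugins {
            id 'java'               // adds compile, test and jar tasks
        }
        repositories {
            mavenCentral()          // where dependencies are resolved from
        }
        dependencies {
            testImplementation 'junit:junit:4.13.2'
        }
    Running gradle build then compiles, tests, and packages the project.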
    Question: What Are Advantages of Gradle
    +
    Gradle provides many advantages, listed here. Declarative builds: Probably one of the biggest advantages of Gradle is the Groovy language. Gradle provides declarative language elements, which provide build-by-convention support for Java, Groovy, Web, and Scala. Structured builds: Gradle allows developers to apply common design principles to their build. It provides a perfect structure for the build, so that well-structured, easily maintained, and comprehensible build structures can be created. Deep API: Using this API, developers can monitor and customize Gradle's configuration and execution behavior. Scalability: Gradle can easily increase productivity, from simple single-project builds to huge enterprise multi-project builds. Multi-project builds: Gradle supports multi-project builds and also partial builds. Build management: Gradle supports different strategies for managing project dependencies. First build integration tool: Gradle completely supports Ant tasks and Maven and Ivy repository infrastructure for publishing and retrieving dependencies. It also provides a converter for turning a Maven pom.xml into a Gradle script. Ease of migration: Gradle can easily adapt to any project structure. Gradle Wrapper: The Gradle Wrapper allows developers to execute Gradle builds on machines where Gradle is not installed. This is useful for continuous integration servers. Free and open source: Gradle is an open-source project, licensed under the Apache Software License (ASL). Groovy: Gradle's build scripts are written in Groovy, not XML. But unlike other approaches, this is not simply for exposing the raw scripting power of a dynamic language. The whole design of Gradle is oriented towards being used as a language, not as a rigid framework.
    Question: Why Gradle is Preferred Over Maven or Ant
    +
    There isn't great support for multi-project builds in Ant and Maven; developers end up doing a lot of coding to support multi-project builds. Having some build-by-convention is also nice and makes build scripts more concise. Maven takes build-by-convention too far, and customizing your build process becomes a hack. Maven also promotes every project publishing an artifact; it does not support subprojects being built and versioned together. With Gradle, developers get the flexibility of Ant and the build-by-convention of Maven. Groovy is easier and cleaner to code than XML. In Gradle, developers can define dependencies between projects on the local file system without the need to publish artifacts to a repository. Question: Gradle vs Maven. The following is a summary of the major differences between Gradle and Apache Maven. Flexibility: Google chose Gradle as the official build tool for Android; not because build scripts are code, but because Gradle is modeled in a way that is extensible in the most fundamental ways. Both Gradle and Maven provide convention over configuration. However, Maven provides a very rigid model that makes customization tedious and sometimes impossible. While this can make it easier to understand any given Maven build, it also makes Maven unsuitable for many automation problems. Gradle, on the other hand, is built with an empowered and responsible user in mind. Performance: Both Gradle and Maven employ some form of parallel project building and parallel dependency resolution. The biggest differences are Gradle's mechanisms for work avoidance and incrementality. The following features make Gradle much faster than Maven: Incrementality: Gradle avoids work by tracking the inputs and outputs of tasks and only running what is necessary. Build cache: reuses the build outputs of any other Gradle build with the same inputs. Gradle Daemon: a long-lived process that keeps build information "hot" in memory. User experience: Maven has very good support for various IDEs. Gradle's IDE support continues to improve quickly but is not as great as Maven's. Although IDEs are important, a large number of users prefer to execute build operations through a command-line interface. Gradle provides a modern CLI that has discoverability features like `gradle tasks`, as well as improved logging and command-line completion. Dependency management: Both build systems provide the built-in capability to resolve dependencies from configurable repositories. Both are able to cache dependencies locally and download them in parallel. As a library consumer, Maven allows one to override a dependency, but only by version. Gradle provides customizable dependency selection and substitution rules that can be declared once and handle unwanted dependencies project-wide. This substitution mechanism enables Gradle to build multiple source projects together to create composite builds. Maven has few built-in dependency scopes, which forces awkward module architectures in common scenarios like using test fixtures or code generation. There is no separation between unit and integration tests, for example. Gradle allows custom dependency scopes, which provides better-modeled and faster builds.
    Question: What are Gradle Build Scripts
    +
    Gradle uses build script files for handling projects and tasks. Every Gradle build represents one or more projects. A project can represent a library JAR or a web application.
    Question: What is Gradle Wrapper
    +
    The wrapper is a batch script on Windows and a shell script on other operating systems. The Gradle Wrapper is the preferred way of starting a Gradle build. When a Gradle build is started via the wrapper, the declared Gradle version is automatically downloaded and used to run the build.
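    For example, generating and using the wrapper (the version number is only an example):
        gradle wrapper --gradle-version 7.6   # generates gradlew, gradlew.bat and the wrapper JAR
        ./gradlew build                       # builds with the pinned Gradle version, downloading it if needed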
    Question: What is Gradle Build Script File Name
    +
    The Gradle build script file is named build.gradle. It is written in the Gradle scripting language, the Groovy DSL.
    Question: How To Add Dependencies In Gradle
    +
    To make sure that a dependency for your project is added, you need to declare it in the appropriate dependency configuration, for example in the dependencies block of the build.gradle file.
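    A sketch of such declarations in build.gradle (the artifact coordinates and versions are examples only):
        dependencies {
            implementation 'org.apache.commons:commons-lang3:3.12.0'   // compile-time and runtime
            runtimeOnly 'org.postgresql:postgresql:42.6.0'             // runtime only
            testImplementation 'junit:junit:4.13.2'                    // test compilation and execution
        }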
    Question: What is Dependency Configuration
    +
    A dependency configuration comprises the external dependencies which you need to install and download from the web. Some key configurations are: Compile: the dependencies required to compile the production source of the project. Runtime: the dependencies required by the production classes at runtime. Test Compile: the dependencies required to compile the test source of the project. Test Runtime: the dependencies required to run the tests; by default this also includes the runtime dependencies.
    Question: What is Gradle Daemon
    +
    A daemon is a computer program that runs as a background process, rather than being under the direct control of an interactive user. Gradle runs on the Java Virtual Machine (JVM) and uses several supporting libraries that require a non-trivial initialization time. As a result, it can sometimes seem a little slow to start. The solution to this problem is the Gradle Daemon: a long-lived background process that executes your builds much more quickly than would otherwise be the case. It accomplishes this by avoiding the expensive bootstrapping process and by leveraging caching, keeping data about your project in memory. Running Gradle builds with the Daemon is no different than running them without it.
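    The daemon can be controlled from the command line or via gradle.properties:
        gradle build --daemon      # force this build to use the daemon
        gradle build --no-daemon   # bypass the daemon for one build
        gradle --stop              # stop all running daemon processes
    Setting org.gradle.daemon=true in gradle.properties enables it for every build in that project.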
    Question: What is Dependency Management in Gradle
    +
    Software projects rarely work in isolation. In most cases, a project relies on reusable functionality in the form of libraries, or is broken up into individual components to compose a modularized system. Dependency management is a technique for declaring, resolving, and using the dependencies required by a project in an automated fashion. Gradle has built-in support for dependency management and lives up to the task of fulfilling typical scenarios encountered in modern software projects. Question: What Are the Benefits of the Daemon in Gradle 3.0. Here are some of the benefits of the Gradle Daemon: It has good UX. It is very powerful. It is aware of the resources. It is well integrated with Gradle build scans. It is enabled by default.
    Question: What is Gradle Multi-Project Build
    +
    Multi-project builds help with modularization. They allow a person to concentrate on one area of work in a larger project, while Gradle takes care of dependencies from other parts of the project. A multi-project build in Gradle consists of one root project and one or more subprojects that may also have subprojects. While each subproject could configure itself in complete isolation from the other subprojects, it is common that subprojects share common traits. It is then usually preferable to share configuration among projects, so the same configuration affects several subprojects.
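    A minimal settings.gradle sketch for such a layout (the project names are hypothetical):
        rootProject.name = 'demo'
        include 'core', 'web'     // two subprojects living in core/ and web/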
    Question: What is Gradle Build Task
    +
    Gradle build tasks are made up of one or more projects, and a project represents what is being done with Gradle. Some key features of Gradle build tasks are: A task has lifecycle methods (doFirst, doLast). Build scripts are code. Default tasks like run, clean, etc. Task dependencies can be defined using properties like dependsOn; see the sketch below.
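    A small sketch in build.gradle showing doLast actions and a dependsOn relationship (the task names are illustrative):
        task hello {
            doLast {
                println 'Hello from the hello task'
            }
        }
        task greet(dependsOn: hello) {
            doLast {
                println 'greet always runs after hello'
            }
        }
    Running gradle greet executes hello first, then greet.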
    Question: What is Gradle Build Life Cycle
    +
    The Gradle build lifecycle consists of the following three steps. Initialization phase: in this phase the project layer and project objects are organized. Configuration phase: in this phase all the tasks for the current build are made available and a dependency graph is created. Execution phase: in this phase tasks are executed.
    Question: What is Gradle Java Plugin
    +
    The Java plugin adds Java compilation along with testing and bundling capabilities to a project. It introduces the concept of a SourceSet, which is a group of source files compiled and executed together.
    Question: What is Dependency Configuration
    +
    A set of dependencies is termed a dependency configuration, which contains the external dependencies to download and install. Here are some key features of dependency configurations: Compile: the dependencies the project needs in order to compile. Runtime: the dependencies required at runtime by the production classes. Test Compile: the dependencies required to compile the test source of the project. Test Runtime: the dependencies required to run the tests; by default this also includes the runtime dependencies. Groovy DevOps Interview Questions
    Question: What is Groovy
    +
    Apache Groovy is an object-oriented programming language for the Java platform. It is both a static and dynamic language, with features similar to those of Python, Ruby, Perl, and Smalltalk. It can be used as both a programming language and a scripting language for the Java platform, is compiled to Java virtual machine (JVM) bytecode, and interoperates seamlessly with other Java code and libraries. Groovy uses a curly-bracket syntax similar to Java. Groovy supports closures, multiline strings, and expressions embedded in strings. Much of Groovy's power lies in its AST transformations, triggered through annotations.
    Question: Why Groovy is Gaining Popularity
    +
    Here are a few reasons for the popularity of Groovy: Familiar OOP language syntax. Extensive stock of Java libraries. Increased expressivity (typeless, to do more). Dynamic typing (lets you code more quickly, at least initially). Closures. Native associative array/key-value mapping support (you can create an associative array literal). String interpolation (cleaner creation of strings displaying values). Regexes as first-class citizens. Question: What is Meant by Thin Documentation in Groovy. Groovy is documented very badly. In fact, the core documentation of Groovy is limited, and there is no information regarding complex and run-time errors. Developers are largely on their own, and they normally have to figure out explanations of the internal workings by themselves.
    Question: How To Run Shell Commands in Groovy
    +
    Groovy adds the execute method to String to make executing shell commands fairly easy: println "ls".execute().text
    Question: In How Many Platforms you can use Groovy
    +
    These are the infrastructure components where we can use Groovy: Application servers, Servlet containers, Databases with JDBC drivers, All other Java-based platforms.
    Question: Can Groovy Integrate With Non-Java-Based Languages
    +
    It is possible, but in this case the features are limited. Groovy cannot be made to handle all tasks in the manner it otherwise would.
    Question: What are Pre-Requirements For Groovy
    +
    Installing and using Groovy is easy. Groovy does not have complex system requirements. It is OS-independent. Groovy can perform optimally in every situation. There are many Java-based components in Groovy, which makes it even easier to work with Java applications.
    Question: What is a Closure in Groovy
    +
    A closure in Groovy is an open, anonymous block of code that can take arguments, return a value, and be assigned to a variable. A closure may reference variables declared in its surrounding scope. In opposition to the formal definition of a closure, a Closure in the Groovy language can also contain free variables which are defined outside of its surrounding scope. A closure definition follows this syntax: { [closureParameters ->] statements } where [closureParameters ->] is an optional comma-delimited list of parameters and statements are 0 or more Groovy statements. The parameters look similar to a method parameter list, and these parameters may be typed or untyped. When a parameter list is specified, the -> character is required and serves to separate the arguments from the closure body. The statements portion consists of 0, 1, or many Groovy statements.
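    A short example of both points (the names are illustrative):
        def square = { int x -> x * x }       // closure with a typed parameter
        assert square(4) == 16
        def factor = 3                        // free variable from the surrounding scope
        def scale = { n -> n * factor }
        assert scale(5) == 15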
Question: What is ExpandoMetaClass In Groovy
    +
Through this class programmers can add properties, constructors, methods and operations to a task. It is a powerful option available in Groovy. By default this class cannot be inherited, and users need to enable that explicitly. The command for this is "ExpandoMetaClass.enableGlobally()".
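A minimal sketch of adding a method at runtime (the shout method is invented for illustration):
ExpandoMetaClass.enableGlobally()   // opt in before making meta-class changes that should be inherited
String.metaClass.shout = { -> delegate.toUpperCase() }
assert 'groovy'.shout() == 'GROOVY'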
    Question: What Are Limitations Of Groovy
    +
Groovy has some limitations. They are described below: It can be slower than other object-oriented programming languages. It might need more memory than other languages. The start-up time of Groovy requires improvement. For using Groovy, you need to have enough knowledge of Java; knowledge of Java is important because half of Groovy is based on Java. It might take you some time to get used to the usual syntax and default typing. It consists of thin documentation.
Question: How To Write Hello World Program In Groovy
+
The following is a basic Hello World program written in Groovy:
class Test {
    static void main(String[] args) {
        println('Hello World');
    }
}
    Question: How To Declare String In Groovy
    +
In Groovy, the following points apply when declaring a string: The string is closed with single or double quotes. It can contain Groovy expressions noted in ${}. Square bracket syntax may be applied, like charAt(i).
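A brief sketch of each point:
def literal = 'no interpolation here'   // single quotes: plain String
def who = 'Groovy'
def message = "Hello ${who}"            // double quotes plus ${}: GString interpolation
assert message == 'Hello Groovy'
assert message[0] == 'H'                // square-bracket syntax, like charAt(0)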
    Question: Differences Between Java And Groovy
    +
Groovy tries to be as natural as possible for Java developers. Here are the major differences between Java and Groovy:
-Default imports
In Groovy all of these packages and classes are imported by default, i.e. developers do not have to use an explicit import statement to use them: java.io.*, java.lang.*, java.math.BigDecimal, java.math.BigInteger, java.net.*, java.util.*, groovy.lang.*, groovy.util.*
-Multi-methods
In Groovy, the methods which will be invoked are chosen at runtime. This is called runtime dispatch or multi-methods. It means that the method will be chosen based on the types of the arguments at runtime. In Java, this is the opposite: methods are chosen at compile time, based on the declared types.
-Array initializers
In Groovy, the { ... } block is reserved for closures. That means that you cannot create array literals with this syntax:
int[] array = { 6, 3, 1 }
You actually have to use:
int[] array = [6, 3, 1]
-ARM blocks
ARM (Automatic Resource Management) blocks from Java 7 are not supported in Groovy. Instead, Groovy provides various methods relying on closures, which have the same effect while being more idiomatic.
-GStrings
As double-quoted string literals are interpreted as GString values, Groovy may fail with a compile error or produce subtly different code if a class with a String literal containing a dollar character is compiled with the Groovy and Java compilers. While Groovy will typically auto-cast between GString and String if an API declares the type of a parameter, beware of Java APIs that accept an Object parameter and then check the actual type.
-String and Character literals
Singly-quoted literals in Groovy are used for String, and double-quoted literals result in String or GString, depending on whether there is interpolation in the literal.
assert 'c'.getClass()==String
assert "c".getClass()==String
assert "c${1}".getClass() in GString
Groovy will automatically cast a single-character String to char only when assigning to a variable of type char. When calling methods with arguments of type char we need to either cast explicitly or make sure the value has been cast in advance.
char a='a'
assert Character.digit(a, 16)==10 : 'But Groovy does boxing'
assert Character.digit((char) 'a', 16)==10
try {
    assert Character.digit('a', 16)==10
    assert false: 'Need explicit cast'
} catch(MissingMethodException e) {
}
Groovy supports two styles of casting, and in the case of casting to char there are subtle differences when casting multi-char strings. The Groovy-style cast is more lenient and will take the first character, while the C-style cast will fail with an exception.
// for single-char strings, both are the same
assert ((char) "c").class==Character
assert ("c" as char).class==Character
// for multi-char strings they are not
try {
    ((char) 'cx') == 'c'
    assert false: 'will fail - not castable'
} catch(GroovyCastException e) {
}
assert ('cx' as char) == 'c'
assert 'cx'.asType(char) == 'c'
-Behaviour of ==
In Java, == means equality of primitive types or identity for objects. In Groovy, == translates to a.compareTo(b)==0 if they are Comparable, and a.equals(b) otherwise. To check for identity, there is is. E.g. a.is(b).
    Question: How To Test Groovy Application
    +
The Groovy programming language comes with great support for writing tests, in addition to the language features and test integration with state-of-the-art testing libraries and frameworks. The Groovy ecosystem has produced a rich set of testing libraries and frameworks. Groovy provides the following testing capabilities: JUnit integration, Spock for specifications, Geb for functional tests. Groovy also has excellent built-in support for a range of mocking and stubbing alternatives. When using Java, dynamic mocking frameworks are very popular. A key reason for this is that it is hard work creating custom hand-crafted mocks using Java. Such frameworks can be used easily with Groovy.
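A tiny JUnit-style sketch using GroovyTestCase (the import shown is for Groovy 3+; in older versions the class lives in groovy.util and is available without an import):
import groovy.test.GroovyTestCase

class ArithmeticTest extends GroovyTestCase {
    void testAddition() {
        assert 2 + 2 == 4   // Groovy's power assert gives rich failure output
    }
}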
    Question: What Are Power Assertions In Groovy
    +
Writing tests means formulating assumptions by using assertions. In Java this can be done by using the assert keyword. But Groovy comes with a powerful variant of assert, also known as the power assertion statement. Groovy's power assert differs from the Java version in its output when the boolean expression evaluates to false:
def x = 1
assert x == 2
// Output:
//
// Assertion failed:
// assert x == 2
//        | |
//        1 false
The java.lang.AssertionError that is thrown whenever the assertion cannot be validated successfully contains an extended version of the original exception message. The power assertion output shows evaluation results from the outer to the inner expression. The power assertion statement's true power is unleashed in complex Boolean statements, or statements with collections or other toString-enabled classes:
def x = [1,2,3,4,5]
assert (x << 6) == [6,7,8,9,10]
// Output:
//
// Assertion failed:
// assert (x << 6)==[6,7,8,9,10]
//         | |    |
//         | |    false
//         | [1, 2, 3, 4, 5, 6]
//         [1, 2, 3, 4, 5, 6]
    Question: Can We Use Design Patterns In Groovy
    +
Design patterns can also be used with Groovy. Here are the important points: Some patterns carry over directly (and can make use of normal Groovy syntax improvements for greater readability). Some patterns are no longer required because they are built right into the language, or because Groovy supports a better way of achieving the intent of the pattern. Some patterns that have to be expressed at the design level in other languages can be implemented directly in Groovy (due to the way Groovy can blur the distinction between design and implementation).
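One example of a pattern built right into the language is Singleton via an annotation (the class name is illustrative):
@Singleton
class BuildRegistry {
    final List<String> builds = []
}
BuildRegistry.instance.builds << 'app-1.0'   // the generated 'instance' is the single shared object
assert BuildRegistry.instance.builds == ['app-1.0']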
Question: How To Parse And Produce JSON Object In Groovy
    +
Groovy comes with integrated support for converting between Groovy objects and JSON. The classes dedicated to JSON serialisation and parsing are found in the groovy.json package. JsonSlurper is a class that parses JSON text or reader content into Groovy data structures (objects) such as maps, lists and primitive types like Integer, Double, Boolean and String. The class comes with a bunch of overloaded parse methods plus some special methods such as parseText, parseFile and others.
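A minimal parse-and-produce round trip, with illustrative data:
import groovy.json.JsonOutput
import groovy.json.JsonSlurper

def parsed = new JsonSlurper().parseText('{"name":"Groovy","year":2003}')
assert parsed.name == 'Groovy'
assert parsed.year == 2003

def json = JsonOutput.toJson([name: 'Groovy', tags: ['jvm', 'dynamic']])
assert json == '{"name":"Groovy","tags":["jvm","dynamic"]}'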
Question: What is Difference Between XmlParser And XmlSlurper
    +
XmlParser and XmlSlurper are used for parsing XML with Groovy. Both have the same approach to parse an XML document. Both come with a bunch of overloaded parse methods plus some special methods such as parseText, parseFile and others.
XmlSlurper
def text = '''
<list>
    <technology>
        <name>Groovy</name>
    </technology>
</list>
'''
def list = new XmlSlurper().parseText(text)
assert list instanceof groovy.util.slurpersupport.GPathResult
assert list.technology.name == 'Groovy'
Parsing the XML and returning the root node as a GPathResult. Checking we're using a GPathResult. Traversing the tree in a GPath style.
XmlParser
def text = '''
<list>
    <technology>
        <name>Groovy</name>
    </technology>
</list>
'''
def list = new XmlParser().parseText(text)
assert list instanceof groovy.util.Node
assert list.technology.name.text() == 'Groovy'
Parsing the XML and returning the root node as a Node. Checking we're using a Node. Traversing the tree in a GPath style.
Let's see the similarities between XmlParser and XmlSlurper first: Both are based on SAX, so they both have a low memory footprint. Both can update/transform the XML. But they have key differences: XmlSlurper evaluates the structure lazily, so if you update the XML you'll have to evaluate the whole tree again. XmlSlurper returns GPathResult instances when parsing XML. XmlParser returns Node objects when parsing XML.
When to use one or the other
    +
If you want to transform an existing document into another, then XmlSlurper should be the choice. If you want to update and read at the same time, then XmlParser is the choice.
Maven DevOps Interview Questions
    Question: What is Maven
    +
Maven is a build automation tool used primarily for Java projects. Maven addresses two aspects of building software: First: it describes how software is built. Second: it describes its dependencies. Unlike earlier tools like Apache Ant, it uses conventions for the build procedure, and only exceptions need to be written down. An XML file describes the software project being built, its dependencies on other external modules and components, the build order, directories, and required plug-ins. It comes with pre-defined targets for performing certain well-defined tasks such as compilation of code and its packaging. Maven dynamically downloads Java libraries and Maven plug-ins from one or more repositories such as the Maven 2 Central Repository, and stores them in a local cache. This local cache of downloaded artifacts can also be updated with artifacts created by local projects. Public repositories can also be updated.
    Question: What Are Benefits Of Maven
    +
One of the biggest benefits of Maven is that its design regards all projects as having a certain structure and a set of supported task work-flows. Maven has quick project setup; there are no complicated build.xml files, just a POM and go. All developers in a project use the same jar dependencies due to the centralized POM. With Maven you get a number of reports and metrics for a project "for free". It reduces the size of source distributions, because jars can be pulled from a central location. Maven lets developers get package dependencies easily. With Maven there is no need to add jar files manually to the class path.
Question: What Are Build Lifecycles In Maven
    +
A build lifecycle is a list of named phases that can be used to give order to goal execution. One of Maven's standard lifecycles is the default lifecycle, which includes the following phases, in this order: validate, generate-sources, process-sources, generate-resources, process-resources, compile, process-test-sources, process-test-resources, test-compile, test, package, install, deploy.
    Question: What is Meant By Build Tool
    +
Build tools are programs that automate the creation of executable applications from source code. Building incorporates compiling, linking and packaging the code into a usable or executable form. In small projects, developers will often manually invoke the build process. This is not practical for larger projects, where it is very hard to keep track of what needs to be built, in what sequence, and what dependencies there are in the building process. Using an automation tool like Maven, Gradle or Ant allows the build process to be more consistent.
Question: What is Dependency Management Mechanism In Maven
    +
Maven's dependency-handling mechanism is organized around a coordinate system identifying individual artifacts such as software libraries or modules. For example, if a project needs the Hibernate library, it has to simply declare Hibernate's project coordinates in its POM. Maven will automatically download the dependency and the dependencies that Hibernate itself needs, and store them in the user's local repository. The Maven 2 Central Repository is used by default to search for libraries, but developers can configure custom repositories to be used (e.g., company-private repositories) within the POM.
    Question: What is Central Repository Search Engine
    +
The Central Repository Search Engine can be used to find out coordinates for different open-source libraries and frameworks.
    Question: What are Plugins In Maven
    +
Most of Maven's functionality is in plugins. A plugin provides a set of goals that can be executed using the following syntax:
mvn [plugin-name]:[goal-name]
For example, a Java project can be compiled with the compiler-plugin's compile goal by running mvn compiler:compile. There are Maven plugins for building, testing, source control management, running a web server, generating Eclipse project files, and much more. Plugins are introduced and configured in a plugins section of the pom.xml file. Some basic plugins are included in every project by default, and they have sensible default settings.
Question: What is Difference Between Maven And ANT
    +
Ant is a toolbox; Maven is a framework.
Ant has no life cycle; Maven has a life cycle.
Ant doesn't have formal conventions; Maven has a convention to place source code, compiled code, etc.
Ant is procedural; Maven is declarative.
Ant scripts are not reusable; Maven plugins are reusable.
    Question: What is POM In Maven
    +
A Project Object Model (POM) provides all the configuration for a single project. General configuration covers the project's name, its owner and its dependencies on other projects. One can also configure individual phases of the build process, which are implemented as plugins. For example, one can configure the compiler-plugin to use Java version 1.5 for compilation, or specify packaging the project even if some unit tests fail. Larger projects should be divided into several modules, or sub-projects, each with its own POM. One can then write a root POM through which one can compile all the modules with a single command. POMs can also inherit configuration from other POMs. All POMs inherit from the Super POM by default. The Super POM provides default configuration, such as default source directories, default plugins, and so on.
    Question: What is Maven Archetype
    +
Archetype is a Maven project templating toolkit. An archetype is defined as an original pattern or model from which all other things of the same kind are made.
    Question: What is Maven Artifact
    +
In Maven an artifact is simply a file or JAR that is deployed to a Maven repository. An artifact has: -Group ID -Artifact ID -Version string. The three together uniquely identify the artifact. All the project dependencies are specified as artifacts.
    Question: What is Goal In Maven
    +
In Maven a goal represents a specific task which contributes to the building and managing of a project. It may be bound to 1 or many build phases. A goal not bound to any build phase can be executed outside of the build lifecycle by its direct invocation.
    Question: What is Build Profile
    +
In Maven a build profile is a set of configurations. This set is used to define or override the default behaviour of a Maven build. Build profiles help developers customize the build process for different environments. For example, you can set profiles for Test, UAT, Pre-prod and Prod environments, each with its own configuration.
    Question: What Are Build Phases In Maven
    +
There are 6 build phases: -Validate -Compile -Test -Package -Install -Deploy
Question: What is Target, Source & Test Folders In Maven
    +
Target: this folder holds the compiled unit of code as part of the build process. Source: this folder usually holds the Java source code. Test: this directory contains all the unit testing code.
Question: What is Difference Between Compile & Install
    +
Compile: is used to compile the source code of the project. Install: installs the package into the local repository, for use as a dependency in other projects locally.
    Question: How To Activate Maven Build Profile
    +
A Maven Build Profile can be activated in the following ways: Using command line console input. By using Maven settings. Based on environment variables (User/System variables).
Linux DevOps Interview Questions
    Question: What is Linux
    +
Linux is the best-known and most-used open source operating system. As an operating system, Linux is software that sits underneath all of the other software on a computer, receiving requests from those programs and relaying these requests to the computer's hardware. In many ways, Linux is similar to other operating systems such as Windows, OS X, or iOS. But Linux also is different from other operating systems in many important ways. First, and perhaps most importantly, Linux is open source software. The code used to create Linux is free and available to the public to view, edit, and, for users with the appropriate skills, to contribute to. The Linux operating system consists of 3 components, which are as below: Kernel: Linux is a monolithic kernel; it is free and open source software that is responsible for managing hardware resources for the users. System Library: the System Library plays a vital role because application programs access the Kernel's features using the system library. System Utility: System Utility performs specific and individual-level tasks.
Question: What is Difference Between Linux & Unix
    +
Unix and Linux are similar in many ways, and in fact, Linux was originally created to be similar to Unix. Both have similar tools for interfacing with the system, programming tools, filesystem layouts, and other key components. However, Unix is not free. Over the years, a number of different operating systems have been created that attempted to be "unix-like" or "unix-compatible," but Linux has been the most successful, far surpassing its predecessors in popularity.
    Question: What is BASH
    +
BASH stands for Bourne Again Shell. BASH is the UNIX shell for the GNU operating system. BASH is the command language interpreter that helps you to enter your input, and thus you can retrieve information. In straightforward language, BASH is a program that understands the data entered by the user, executes the command, and gives output.
    Question: What is CronTab
    +
The crontab (short for "cron table") is a list of commands that are scheduled to run at regular time intervals on a computer system. The crontab command opens the crontab for editing, and lets you add, remove, or modify scheduled tasks. The daemon which reads the crontab and executes the commands at the right time is called cron. It is named after Kronos, the Greek god of time. Command syntax:
crontab [-u user] file
crontab [-u user] [-l | -r | -e] [-i] [-s]
    Question: What is Daemon In Linux
    +
A daemon is a type of program on Linux operating systems that runs unobtrusively in the background, rather than under the direct control of a user, waiting to be activated by the occurrence of a specific event or condition. Unix-like systems typically run numerous daemons, mainly to accommodate requests for services from other computers on a network, but also to respond to other programs and to hardware activity. Examples of actions or conditions that can trigger daemons into activity are a specific time or date, passage of a specified time interval, a file landing in a particular directory, receipt of an e-mail, or a Web request made through a particular communication line. It is not necessary that the perpetrator of the action or condition be aware that a daemon is listening, although programs frequently will perform an action only because they are aware that they will implicitly arouse a daemon.
    Question: What is Process In Linux
    +
Daemons are usually instantiated as processes. A process is an executing (i.e., running) instance of a program. Processes are managed by the kernel (i.e., the core of the operating system), which assigns each a unique process identification number (PID). There are three basic types of processes in Linux: -Interactive: interactive processes are run interactively by a user at the command line. -Batch: batch processes are submitted from a queue of processes and are not associated with the command line; they are well suited for performing recurring tasks when system usage is otherwise low. -Daemon: daemons are recognized by the system as any processes whose parent process has a PID of one.
    Question: What is CLI In Linux
    +
CLI (Command Line Interface) is a type of human-computer interface that relies solely on textual input and output. That is, the entire display screen, or the currently active portion of it, shows only characters (and no images), and input is usually performed entirely with a keyboard.
Question: What is Linux Kernel
    +
A kernel is the lowest level of easily replaceable software that interfaces with the hardware in your computer. It is responsible for interfacing all of your applications that are running in "user mode" down to the physical hardware, and allowing processes, known as servers, to get information from each other using inter-process communication (IPC). There are three types of kernels: Microkernel: a microkernel takes the approach of only managing what it has to: CPU, memory, and IPC. Pretty much everything else in a computer can be seen as an accessory and can be handled in user mode. Monolithic Kernel: monolithic kernels are the opposite of microkernels, because they encompass not only the CPU, memory, and IPC, but also include things like device drivers, file system management, and system server calls. Hybrid Kernel: hybrid kernels have the ability to pick and choose what they want to run in user mode and what they want to run in supervisor mode. Because the Linux kernel is monolithic, it has the largest footprint and the most complexity of the kernel types. This was a design feature which was under quite a bit of debate in the early days of Linux, and the kernel still carries some of the design flaws that are inherent to monolithic kernels.
    Question: What is Partial Backup In Linux
    +
Partial backup refers to selecting only a portion of the file hierarchy or a single partition to back up.
    Question: What is Root Account
    +
The root account is a system administrator account. It provides you full access and control of the system. The admin can create and maintain user accounts, assign different permissions for each account, etc.
Question: What is Difference Between Cron and Anacron
    +
One of the main differences between cron and anacron jobs is that cron works on systems that are running continuously, while anacron is used for systems that are not running continuously. Another difference between the two is that cron jobs can run every minute, but anacron jobs can be run only once a day. Any normal user can do the scheduling of cron jobs, but the scheduling of anacron jobs can be done by the superuser only. Cron should be used when you need to execute the job at a specific time as per the given time in cron, but anacron should be used when there is no restriction on the timing and it can be executed at any time. If we think about which one is ideal for servers or desktops, then cron should be used for servers, while anacron should be used for desktops or laptops.
    Question: What is Linux Loader
    +
Linux Loader is a boot loader for the Linux operating system. It loads Linux into main memory so that it can begin its operations.
    Question: What is Swap Space
    +
Swap space is the amount of disk space that is allocated for use by Linux to temporarily hold some of the concurrently running programs. This condition usually occurs when RAM does not have enough memory to support all concurrently running programs. This memory management involves the swapping of memory to and from physical storage.
Question: What Are Linux Distributions
    +
There are around six hundred Linux distributions. Let us see some of the important ones: Ubuntu: a well-known Linux distribution with a lot of pre-installed apps and easy-to-use repositories and libraries. It is very easy to use and works like the MAC operating system. Linux Mint: it uses the Cinnamon and MATE desktops. It works like Windows and should be used by newcomers. Debian: the most stable, quick and user-friendly Linux distribution. Fedora: less stable, but provides the latest versions of software. It has the GNOME3 desktop environment by default. Red Hat Enterprise: to be used commercially and well tested before release. It usually provides a stable platform for a long time. Arch Linux: every package is to be installed by you, and it is not suitable for beginners.
    Question: Why Do Developers Use MD5
    +
MD5 is a cryptographic hashing function, so it is used to hash passwords before saving them, rather than storing them in plain text.
Question: What Are File Permissions In Linux
    +
There are 3 types of permissions in Linux: Read: the user can read the file and list the directory. Write: the user can write new files in the directory. Execute: the user can access and run the file in a directory.
    Question: Memory Management In Linux
    +
It is always required to keep a check on memory usage in order to find out whether the user is able to access the server or whether the resources are adequate. There are roughly 5 methods that determine the total memory used by Linux. These are explained below: free command: this is the simplest and easiest-to-use command to check memory usage. For example: '$ free -m'; the option 'm' displays all the data in MBs. /proc/meminfo: the next way to determine the memory usage is to read the /proc/meminfo file. For example: '$ cat /proc/meminfo' vmstat: this command basically lays out the memory usage statistics. For example: '$ vmstat -s' top command: this command determines the total memory usage as well as monitoring the RAM usage. htop: this command also displays the memory usage along with other details.
    Question: Granting Permis sions In Linux
    +
The system administrator or the owner of the file can grant permissions using the 'chmod' command. The following symbols are used while writing permissions, for example: chmod +x
    Question: What Are Directory Commands In Linux
    +
Here are a few important directory commands in Linux: pwd: a built-in command which stands for 'print working directory'. It displays the current working location, the working path starting with /, and the directory of the user. Basically, it displays the full path to the directory you are currently in. ls: this command lists out all the files in the directed folder. cd: this stands for 'change directory'. This command is used to change to the directory you want to work in from the present directory. We just need to type cd followed by the directory name to access that particular directory. mkdir: this command is used to create an entirely new directory. rmdir: this command is used to remove a directory from the system.
    Question: What is Shell Script In Linux
    +
In the simplest terms, a shell script is a file containing a series of commands. The shell reads this file and carries out the commands as though they had been entered directly on the command line. The shell is somewhat unique, in that it is both a powerful command line interface to the system and a scripting language interpreter. Most of the things that can be done on the command line can be done in scripts, and most of the things that can be done in scripts can be done on the command line. The shell also provides a set of features usually (but not always) used when writing programs.
Question: Which Tools Are Used For Reporting Statistics In Linux
    +
Some of the popular and frequently used system resource statistics tools available on the Linux platform are vmstat, netstat, iostat, ifstat and mpstat. These are used for reporting statistics from different system components such as virtual memory, network connections and interfaces, CPU, input/output devices and more.
    Question: What is Dstat In Linux
    +
dstat is a powerful, flexible and versatile tool for generating Linux system resource statistics; it is a replacement for all the tools mentioned in the question above. It comes with extra features and counters, and it is highly extensible; users with Python knowledge can build their own plugins. Features of dstat: Joins information from the vmstat, netstat, iostat, ifstat and mpstat tools. Displays statistics simultaneously. Orders counters and is highly extensible. Supports summarizing of grouped block/network devices. Displays interrupts per device. Works on accurate timeframes, with no timeshifts when a system is stressed. Supports colored output; it indicates different units in different colors. Shows exact units and limits conversion mistakes as much as possible. Supports exporting of CSV output to Gnumeric and Excel documents.
    Question: Types Of Processes In Linux
    +
There are fundamentally two types of processes in Linux: Foreground processes (also referred to as interactive processes) – these are initialized and controlled through a terminal session. In other words, there has to be a user connected to the system to start such processes; they haven't started automatically as part of the system functions/services. Background processes (also referred to as non-interactive/automatic processes) – these are processes not connected to a terminal; they don't expect any user input.
Question: Creation Of Processes In Linux
    +
A new process is normally created when an existing process makes an exact copy of itself in memory. The child process will have the same environment as its parent, but only the process ID number is different. There are two conventional ways of creating a new process in Linux: Using the system() function – this method is relatively simple, but it is inefficient and has certain security risks. Using the fork() and exec() functions – this technique is a little advanced but offers greater flexibility, speed, and security.
Question: Parent And Child Processes In Linux
    +
Because Linux is a multi-user system, meaning different users can be running various programs on the system, each running instance of a program must be identified uniquely by the kernel. A program is identified by its process ID (PID) as well as its parent process ID (PPID); therefore processes can further be categorized into: Parent processes – these are processes that create other processes during run-time. Child processes – these processes are created by other processes during run-time.
    Question: What is Init Process Linux
    +
The init process is the mother (parent) of all processes on the system; it is the first program that is executed when the Linux system boots up, and it manages all other processes on the system. It is started by the kernel itself, so in principle it does not have a parent process. The init process always has a process ID of 1. It functions as an adoptive parent for all orphaned processes. You can use the pidof command to find the ID of a process:
# pidof systemd
# pidof top
# pidof httpd
To find the process ID and the parent process ID of the current shell, run:
$ echo $$
$ echo $PPID
Question: What Are Different States Of A Process In Linux
    +
During execution, a process changes from one state to another depending on its environment/circumstances. In Linux, a process has the following possible states: Running – here it is either running (it is the current process in the system) or it is ready to run (it is waiting to be assigned to one of the CPUs). Waiting – in this state, a process is waiting for an event to occur or for a system resource. Additionally, the kernel also differentiates between two types of waiting processes: interruptible waiting processes, which can be interrupted by signals, and uninterruptible waiting processes, which wait directly on hardware conditions and cannot be interrupted by any event/signal. Stopped – in this state, a process has been stopped, usually by receiving a signal, for instance a process that is being debugged. Zombie – here, a process is dead; it has been halted, but it still has an entry in the process table.
    Question: How To View Active Processes In Linux
    +
There are several Linux tools for viewing/listing running processes on the system; the two traditional and well-known ones are the ps and top commands:
ps Command
It displays information about a selection of the active processes on the system:
# ps
# ps -e | head
top – System Monitoring Tool
top is a powerful tool that offers you a dynamic real-time view of a running system:
# top
glances – System Monitoring Tool
glances is a relatively new system monitoring tool with advanced features:
# glances
Question: How To Control Processes
    +
Linux also has some commands for controlling processes, such as kill, pkill, pgrep and killall. Below are a few basic examples of how to use them:
$ pgrep -u tecmint top
$ kill 2308
$ pgrep -u tecmint top
$ pgrep -u tecmint glances
$ pkill glances
$ pgrep -u tecmint glances
    Question: Can We Send signals To Processes In Linux
    +
The fundamental way of controlling processes in Linux is by sending signals to them. There are multiple signals that you can send to a process; to view all the signals, run:
$ kill -l
To send a signal to a process, use the kill, pkill or pgrep commands we mentioned earlier on. But programs can only respond to signals if they are programmed to recognize those signals. Most signals are for internal use by the system, or for programmers when they write code. The following are signals which are useful to a system user: SIGHUP 1 – sent to a process when its controlling terminal is closed. SIGINT 2 – sent to a process by its controlling terminal when a user interrupts the process by pressing [Ctrl+C]. SIGQUIT 3 – sent to a process if the user sends a quit signal. SIGKILL 9 – this signal immediately terminates (kills) a process, and the process will not perform any clean-up operations. SIGTERM 15 – this is a program termination signal (kill will send this by default). SIGTSTP 20 – sent to a process by its controlling terminal to request it to stop (terminal stop); initiated by the user pressing [Ctrl+Z].
    Question: How To Change Priority Of A Processes InLinux
    +
On a Linux system, all active processes have a priority and a certain nice value. Processes with higher priority will normally get more CPU time than lower priority processes. However, a system user with root privileges can influence this with the nice and renice commands. In the output of the top command, the NI column shows the process nice value:
$ top
Use the nice command to set a nice value for a process. Keep in mind that normal users can attribute a nice value from zero to 20 to processes they own. Only the root user can use negative nice values. To renice the priority of a process, use the renice command as follows:
$ renice +8 2687
$ renice +8 2103
GIT DevOps Interview Questions
    Question: What is Git
    +
Git is a version control system for tracking changes in computer files and coordinating work on those files among multiple people. It is primarily used for source code management in software development, but it can be used to keep track of changes in any set of files. As a distributed revision control system, it is aimed at speed, data integrity, and support for distributed, non-linear workflows. By far the most widely used modern version control system in the world today is Git. Git is a mature, actively maintained open source project originally developed in 2005 by Linus Torvalds. Git is an example of a Distributed Version Control System: in Git, every developer's working copy of the code is also a repository that can contain the full history of all changes.
    Question: What Are Benefits Of GIT
    +
Here are some of the advantages of using Git: Ease of use. Data redundancy and replication. High availability. Superior disk utilization and network performance. Only one .git directory per repository. Collaboration friendly. Any kind of project, from large to small scale, can use Git.
    Question: What is Repository In GIT
    +
The purpose of Git is to manage a project, or a set of files, as they change over time. Git stores this information in a data structure called a repository. A git repository contains, among other things, the following: A set of commit objects. A set of references to commit objects, called heads. The Git repository is stored in the same directory as the project itself, in a subdirectory called .git. Note the differences from central-repository systems like CVS or Subversion: There is only one .git directory, in the root directory of the project. The repository is stored in files alongside the project. There is no central server repository.
    Question: What is Staging Area In GIT
    +
Staging is a step before the commit process in git. That is, a commit in git is performed in two steps: -Staging and -Actual commit. As long as a change set is in the staging area, git allows you to edit it as you like (replace staged files with other versions of staged files, remove changes from staging, etc.).
    Question: What is GIT STASH
    +
Often, when you've been working on part of your project, things are in a messy state and you want to switch branches for a bit to work on something else. The problem is, you don't want to do a commit of half-done work just so you can get back to this point later. The answer to this issue is the git stash command. Stashing takes the dirty state of your working directory (that is, your modified tracked files and staged changes) and saves it on a stack of unfinished changes that you can reapply at any time.
    Question: How To Revert Commit In GIT
    +
Given one or more existing commits, revert the changes that the related patches introduce, and record some new commits that record them. This requires your working tree to be clean (no modifications from the HEAD commit).
git-revert - Revert some existing commits
SYNOPSIS
git revert [--[no-]edit] [-n] [-m parent-number] [-s] [-S[<keyid>]] <commit>…
git revert --continue
git revert --quit
git revert --abort
    Question: How To Delete Remote Repository In GIT
    +
Use the git remote rm command to remove a remote URL from your repository. The git remote rm command takes one argument: a remote name.
Question: What is GIT Stash Drop
    +
In case we do not need a specific stash, we use the git stash drop command to remove it from the list of stashes. By default, this command removes the latest added stash. To remove a specific stash, we specify it as an argument to the git stash drop command.
Question: What is Difference Between GIT and Subversion
    +
Here is a summary of the differences between GIT and Subversion: Git is a distributed VCS; SVN is a non-distributed VCS. SVN has a centralized server and repository; Git does not require a centralized server or repository. The content in Git is stored as metadata; SVN stores files of content. Git branches are easier to work with than SVN branches. Git does not have the global revision number feature that SVN has. Git has better content protection than SVN. Git was developed for the Linux kernel by Linus Torvalds; SVN was developed by CollabNet, Inc. Git is distributed under the GNU license, and its maintenance is overseen by Junio Hamano; Apache Subversion, or SVN, is distributed under the Apache open source license.
Question: What is Difference Between GIT Fetch & GIT Pull
    +
GIT fetch – it downloads only the new data from the remote repository and does not integrate any of the downloaded data into your working files. Providing a view of the data is all it does. GIT pull – it downloads as well as merges the data from the remote repository into the local working files. This may also lead to merge conflicts if the user's local changes are not yet committed. Using the "GIT stash" command hides the local changes.
Question: What is Git Fork and How To Create a Tag
    +
A fork is a copy of a repository. Forking a repository allows you to freely experiment with changes without affecting the original project. A fork is really a GitHub (not Git) construct to store a clone of the repo in your user account. As a clone, it will contain all the branches in the main repo at the time you made the fork.
Create Tag: Click the releases link on our repository page. Click on Create a new release or Draft a new release. Fill out the form fields, then click Publish release at the bottom. After you create your tag on GitHub, you might want to fetch it into your local repository too: git fetch.
Question: What is difference between fork and branch
    +
A fork is a copy of a repository. Forking a repository allows you to freely experiment with changes without affecting the original project. A fork is really a GitHub (not Git) construct to store a clone of the repo in your user account. As a clone, it will contain all the branches in the main repo at the time you made the fork. A branch, in contrast, lives inside a single repository: it is a lightweight pointer to a line of development within that repo.
    Question: What is Cherry Picking In GIT
    +
Cherry picking in git means to choose a commit from one branch and apply it onto another. This is in contrast with other ways such as merge and rebase, which normally apply many commits onto another branch. Make sure you are on the branch you want to apply the commit to:
git checkout master
Execute the following:
git cherry-pick <commit-hash>
    Question: What Language GIT is Written In
    +
    Much of Git is written in C, along with some BASH scriptsfor UI wrappers and other bits.
    Question: How To Rebase Master In GIT
    +
Rebasing is the process of moving a branch to a new base commit. The golden rule of git rebase is to never use it on public branches. The only way to synchronize the two master branches is to merge them back together, resulting in an extra merge commit and two sets of commits that contain the same changes.
Question: What is 'head' in git and how many heads can be created in a repository
    +
There can be any number of heads in a GIT repository. By default there is one head, known as HEAD, in each repository in GIT. HEAD is a ref (reference) to the currently checked out commit. In normal states, it is actually a symbolic ref to the branch the user has checked out; if you look at the contents of .git/HEAD you'll see something like "ref: refs/heads/master". The branch itself is a reference to the commit at the tip of the branch.
    Question: Name some GIT commands and also explain theirfunctions
    +
Here are some of the most important GIT commands: GIT diff – it shows the changes between commits, and between commits and the working tree. GIT status – it shows the difference between the working directory and the index. GIT stash apply – it is used to bring back the saved changes onto the working directory. GIT rm – it removes files from the staging area and also from the disk. GIT log – it is used to find a specific commit in the history. GIT add – it adds file changes in the existing directory to the index. GIT reset – it is used to reset the index as well as the working directory to the state of the last commit. GIT checkout – it is used to update the directories of the working tree with those from another branch without merging. GIT ls-tree – it represents a tree object, including the mode and the name of each item. GIT instaweb – it automatically directs a web browser and runs the web server with an interface into your local repository.
    Question: What is a “conflict” in GIT and how is it resolved
    +
When a commit that has to be merged has some changes in one place, which also has the changes of the current commit, then a conflict arises. GIT will not be able to predict which change will take precedence. In order to resolve the conflict in GIT: we have to edit the files to fix the conflicting changes and then add the resolved files by running the "GIT add" command; later on, to commit the repaired merge, run the "GIT commit" command. GIT identifies the position and sets the parents of the commit correctly.
    Question: How To Migrate From Subversion To GIT
    +
SubGit is a tool for smooth and stress-free Subversion to GIT migration, and also a solution for a company-wide Subversion to GIT migration that: Allows making use of all GIT and Subversion features. Provides a genuinely stress-free migration experience. Doesn't require any change in the infrastructure that is already in place. Is considered to be much better than GIT-SVN.
    Question: What is Index In GIT
    +
The index is a single, large, binary file under the .git folder, which lists all files in the current branch, their sha1 checksums, timestamps and file names. Before completing a commit, changes are formatted and reviewed in an intermediate area known as the index, also known as the staging area.
    Question: What is a bare Git repository
    +
A bare Git repository is a repository that is created without a working tree:
git init --bare
Question: How do you revert a commit that has already been pushed and made public
    +
One or more commits can be reverted through the use of git revert. This command, in essence, creates a new commit with patches that cancel out the changes introduced in specific commits. In case the commit that needs to be reverted has already been published, or changing the repository history is not an option, git revert can be used to revert commits. Running the following command will revert the last two commits:
git revert HEAD~2..HEAD
Alternatively, one can always checkout the state of a particular commit from the past, and commit it anew.
    Question: How do you squash last N commits into a singlecommit
    +
Squashing multiple commits into a single commit will overwrite history, and should be done with caution. However, this is useful when working in feature branches. To squash the last N commits of the current branch, run the following command (with {N} replaced with the number of commits that you want to squash):
git rebase -i HEAD~{N}
Upon running this command, an editor will open with a list of these N commit messages, one per line. Each of these lines will begin with the word "pick". Replacing "pick" with "squash" or "s" will tell Git to combine the commit with the commit before it. To combine all N commits into one, set every commit in the list to be squash except the first one. Upon exiting the editor, and if no conflict arises, git rebase will allow you to create a new commit message for the new combined commit.
    Question: What is a conflict in git and how can it beresolved
    +
A conflict arises when more than one commit that has to be merged has some change in the same place or same line of code. Git will not be able to predict which change should take precedence. This is a git conflict. To resolve the conflict in git, edit the files to fix the conflicting changes and then add the resolved files by running git add. After that, to commit the repaired merge, run git commit. Git remembers that you are in the middle of a merge, so it sets the parents of the commit correctly.
    Question: How To Setup A Script To Run Every Time aRepository Receives New Commits Through Push
    +
To configure a script to run every time a repository receives new commits through push, one needs to define either a pre-receive, update, or post-receive hook, depending on when exactly the script needs to be triggered. The pre-receive hook in the destination repository is invoked when commits are pushed to it. Any script bound to this hook will be executed before any references are updated. This is a useful hook to run scripts that help enforce development policies. The update hook works in a similar manner to the pre-receive hook, and is also triggered before any updates are actually made. However, the update hook is called once for every commit that has been pushed to the destination repository. Finally, the post-receive hook in the repository is invoked after the updates have been accepted into the destination repository. This is an ideal place to configure simple deployment scripts, invoke some continuous integration systems, dispatch notification emails to repository maintainers, etc. Hooks are local to every Git repository and are not versioned. Scripts can either be created within the hooks directory inside the ".git" directory, or they can be created elsewhere and links to those scripts can be placed within the directory.
    Question: What is Commit Hash
    +
In Git each commit is given a unique hash. These hashes can be used to identify the corresponding commits in various scenarios (such as while trying to checkout a particular state of the code using the git checkout {hash} command). Additionally, Git also maintains a number of aliases to certain commits, known as refs. Also, every tag that you create in the repository effectively becomes a ref (and that is exactly why you can use tags instead of commit hashes in various git commands). Git also maintains a number of special aliases that change based on the state of the repository, such as HEAD, FETCH_HEAD, MERGE_HEAD, etc. Git also allows commits to be referred to relative to one another. For example, HEAD~1 refers to the commit parent of HEAD, HEAD~2 refers to the grandparent of HEAD, and so on. In the case of merge commits, where the commit has two parents, ^ can be used to select one of the two parents, e.g. HEAD^2 can be used to follow the second parent. And finally, refspecs. These are used to map local and remote branches together. However, these can be used to refer to commits that reside on remote branches, allowing one to control and manipulate them from a local Git environment.
    Question: What is Conflict In GIT
    +
A conflict arises when more than one commit that has to be merged has some change in the same place or same line of code. Git will not be able to predict which change should take precedence. This is a git conflict. To resolve the conflict in git, edit the files to fix the conflicting changes and then add the resolved files by running git add. After that, to commit the repaired merge, run git commit. Git remembers that you are in the middle of a merge, so it sets the parents of the commit correctly.
Question: What are git hooks
    +
Git hooks are scripts that can run automatically on the occurrence of an event in a Git repository. These are used for automation of workflow in GIT. Git hooks also help in customizing the internal behavior of GIT. They are generally used for enforcing a GIT commit policy.
    Question: What Are Dis advantages Of GIT
    +
GIT has very few disadvantages. These are the scenarios when GIT is difficult to use. Some of these are: Binary files: if we have a lot of binary (non-text) files in our project, then GIT becomes very slow, e.g. projects with a lot of images or Word documents. Steep learning curve: it takes some time for a newcomer to learn GIT. Some of the GIT commands are non-intuitive to a fresher. Slow remote speed: sometimes the use of remote repositories is slow due to network latency. Still, GIT is better than other VCS in speed.
    Question: What is stored inside a commit object inGIT
    +
A GIT commit object contains the following information: SHA1 name: a 40-character string to identify a commit. Files: a list of files that represent the state of a project at a specific point in time. Reference: any reference to parent commit objects.
    Question: What is GIT reset command
    +
The git reset command is used to reset the current HEAD to a specific state. By default it reverses the action of the git add command, so we use git reset to undo the changes of git add.
Question: How GIT protects the code in a repository
    +
GIT is made very secure, since it contains the source code of an organization. All the objects in a GIT repository are checksummed with a hashing algorithm called SHA-1. This algorithm is quite strong and fast. It protects source code and other contents of the repository against possible malicious attacks. This algorithm also maintains the integrity of the GIT repository by protecting the change history against accidental changes.
Continuous Integration Interview Questions
Question: What is Continuous Integration
    +
Continuous Integration is the process of continuously integrating the code, often multiple times per day. The purpose is to find problems quickly and deliver fixes more rapidly. CI is a best practice for software development. It is done to ensure that after every code change there is no issue in the software.
    Question: What is Build Automation
    +
Build automation is the process of automating the creation of a software build and the associated processes, including compiling computer source code into binary code, packaging the binary code, and running automated tests.
    Question: What is Automated Deployment
    +
Automated Deployment is the process of consistently pushing a product to various environments on a "trigger." It enables you to quickly learn what to expect every time you deploy an environment, with much faster results. This, combined with Build Automation, can save development teams a significant number of hours. Automated Deployment saves clients from being extensively offline during development and allows developers to build while "touching" fewer of a client's systems. With an automated system, human error is prevented. In the event of human error, developers are able to catch it before live deployment, saving time and headache. You can even automate the contingency plan and make the site roll back to a working or previous state as if nothing ever happened. Clearly, this automated feature is super valuable in allowing applications and sites to continue running during fixes. Additionally, contingency plans can be version-controlled, improved and even self-tested.
Question: How is Continuous Integration Implemented
    +
Different tools for supporting Continuous Integration are Hudson, Jenkins and Bamboo. Jenkins is the most popular one currently. They provide integration with various version control systems and build tools.
Question: How does the Continuous Integration process work
    +
Whenever a developer commits changes to the version control system, the Continuous Integration server detects that changes have been committed, and goes through the following process: The Continuous Integration server retrieves the latest copy of the changes. It builds the code with the new changes using build tools. If the build fails, it notifies the developer. After the build passes, it runs the automated test cases; if test cases fail, it notifies the developer. It creates a package for the deployment environment.
    Question: What Are The Software Required For ContinuousIntegration process
    +
Here are the minimum tools you need to achieve CI: Source code repository: to commit code and changes, for example git. Server: Continuous Integration software, for example Jenkins, TeamCity. Build tool: it builds the application in a particular way, for example Maven, Gradle. Deployment environment: the environment on which the application will be deployed.
Question: What is Jenkins Software
    +
Jenkins is a self-contained, open source automation server used to automate all sorts of tasks related to building, testing, and delivering or deploying software. Jenkins is one of the leading open source automation servers available. Jenkins has an extensible, plugin-based architecture, which has enabled developers to create 1,400+ plugins to adapt it to a multitude of build, test and deployment technology integrations.
Question: What is a Jenkins Pipeline
    +
Jenkins Pipeline (or simply "Pipeline") is a suite of plugins which supports implementing and integrating continuous delivery pipelines into Jenkins.
Question: What is the difference between Maven, Ant, Gradle and Jenkins
    +
Maven, Ant and Gradle are build technologies, whereas Jenkins is a continuous integration tool.
Question: Why do we use Jenkins
    +
Jenkins is an open-source continuous integration software tool written in the Java programming language for testing and reporting on isolated changes in a larger code base in real time. The Jenkins software enables developers to find and solve defects in a code base rapidly and to automate testing of their builds.
Question: What are CI Tools
+
Here is the list of the top 8 Continuous Integration tools: Jenkins, TeamCity, Travis CI, GoCD, Bamboo, GitLab CI, CircleCI, Codeship.
Question: Which SCM tools does Jenkins support
+
Jenkins supports version control tools including AccuRev, CVS, Subversion, Git, Mercurial, Perforce, ClearCase and RTC, and can execute Apache Ant, Apache Maven and arbitrary shell scripts and Windows batch commands.
Question: Why do we use Pipelines in Jenkins
+
Pipeline adds a powerful set of automation tools onto Jenkins, supporting use cases that span from simple continuous integration to comprehensive continuous delivery pipelines. By modeling a series of related tasks, users can take advantage of the many features of Pipeline: Code: pipelines are implemented in code and typically checked into source control, giving teams the ability to edit, review, and iterate upon their delivery pipeline. Durable: pipelines can survive both planned and unplanned restarts of the Jenkins master. Pausable: pipelines can optionally stop and wait for human input or approval before continuing the run. Versatile: pipelines support complex real-world continuous delivery requirements, including the ability to fork/join, loop, and perform work in parallel. Extensible: the Pipeline plugin supports custom extensions to its DSL and multiple options for integration with other plugins.
Question: How do you create a Multibranch Pipeline in Jenkins
+
The Multibranch Pipeline project type enables you to implement different Jenkinsfiles for different branches of the same project. In a Multibranch Pipeline project, Jenkins automatically discovers, manages and executes Pipelines for branches which contain a Jenkinsfile in source control.
Question: What are Jobs in Jenkins
+
Jenkins can be used to perform the typical build server work, such as doing continuous/official/nightly builds, running tests, or performing repetitive batch tasks. This is called a "free-style software project" in Jenkins.
Question: How do you configure automatic builds in Jenkins
+
Builds in Jenkins can be triggered periodically (on a schedule specified in the configuration), when source changes in the project have been detected, or automatically by requesting a build-trigger URL:
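As a hedged sketch (the Jenkins host, job name and token below are hypothetical placeholders), a build can be triggered remotely over HTTP once "Trigger builds remotely" is enabled for the job:

# request a build of "my-job" using the job's authentication token
curl -X POST "https://jenkins.example.com/job/my-job/build?token=MY_TOKEN"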
Question: What is a Jenkinsfile
+
A Jenkinsfile is a text file that contains the definition of a Jenkins Pipeline and is checked into source control.
Amazon AWS DevOps Interview Questions
Question: What is Amazon Web Services
+
Amazon Web Services provides services that help you practice DevOps at your company and that are built first for use with AWS. These tools automate manual tasks, help teams manage complex environments at scale, and keep engineers in control of the high velocity that is enabled by DevOps.
Question: What are the benefits of AWS for DevOps
+
There are many benefits of using AWS for DevOps: Get started fast: each AWS service is ready to use if you have an AWS account; there is no setup required and no software to install. Fully managed services: these services help you take advantage of AWS resources more quickly; you can worry less about setting up, installing, and operating infrastructure on your own, which lets you focus on your core product. Built for scalability: you can manage a single instance or scale to thousands of instances using AWS services, which help you make the most of flexible compute resources by simplifying provisioning, configuration, and scaling. Programmable: you have the option to use each service via the AWS Command Line Interface or through APIs and SDKs; you can also model and provision AWS resources and your entire AWS infrastructure using declarative AWS CloudFormation templates. Automation: AWS helps you use automation so you can build faster and more efficiently; using AWS services, you can automate manual tasks or processes such as deployments, development and test workflows, container management, and configuration management. Secure: use AWS Identity and Access Management (IAM) to set user permissions and policies, giving you granular control over who can access your resources and how they access them.
Question: How do you handle Continuous Integration and Continuous Delivery in AWS DevOps
+
The AWS Developer Tools help you securely store and version your application's source code and automatically build, test, and deploy your application to AWS.
Question: What is the importance of a buffer in Amazon Web Services
+
An Elastic Load Balancer ensures that incoming traffic is distributed optimally across various AWS instances. A buffer synchronizes different components and makes the arrangement more elastic in the face of a burst of load or traffic; without it, components are prone to receiving and processing requests in an unstable way. The buffer creates an equilibrium between the various components and makes them work at the same rate to supply more rapid services.
Question: What are the components involved in Amazon Web Services
+
There are four components: Amazon S3: with this, one can retrieve the key information occupied in creating cloud structural design, and the amount of produced information can also be stored in this component as the consequence of the specified key. Amazon EC2 instance: helpful for running a large distributed system on a Hadoop cluster; automatic parallelization and job scheduling can be achieved with this component. Amazon SQS: this component acts as a mediator between different controllers; it is also used for cushioning requests obtained by the manager of Amazon. Amazon SimpleDB: helps in storing the transitional position log and the tasks executed by the consumers.
Question: How is a Spot Instance different from an On-Demand Instance or Reserved Instance
+
Spot Instances, On-Demand Instances and Reserved Instances are all pricing models. Spot Instances give customers the ability to purchase compute capacity with no upfront commitment, at hourly rates usually lower than the On-Demand rate in each region. Spot Instances work like bidding, and the bidding price is called the Spot Price. The Spot Price fluctuates based on supply and demand for instances, but customers never pay more than the maximum price they have specified. If the Spot Price moves above a customer's maximum price, the customer's EC2 instance is shut down automatically. The reverse is not true: if the Spot Price comes down again, the EC2 instance is not launched automatically; you have to do that manually. With Spot and On-Demand Instances there is no commitment on duration from the user side, whereas with Reserved Instances you have to stick to the time period you have chosen.
Question: What are the best practices for security in Amazon EC2
+
There are several best practices for securing Amazon EC2. A few of them are given below: Use AWS Identity and Access Management (IAM) to control access to your AWS resources. Restrict access by only allowing trusted hosts or networks to access ports on your instance. Review the rules in your security groups regularly, and ensure that you apply the principle of least privilege: only open up the permissions that you require. Disable password-based logins for instances launched from your AMI; passwords can be found or cracked, and are a security risk.
Question: What is AWS CodeBuild in AWS DevOps
+
AWS CodeBuild is a fully managed build service that compiles source code, runs tests, and produces software packages that are ready to deploy. With CodeBuild, you don't need to provision, manage, and scale your own build servers. CodeBuild scales continuously and processes multiple builds concurrently, so your builds are not left waiting in a queue.
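As a small hedged illustration (the project name is a hypothetical placeholder), a CodeBuild build can be started from the AWS CLI:

# start a build for an existing CodeBuild project
aws codebuild start-build --project-name my-sample-project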
Question: What is Amazon Elastic Container Service in AWS DevOps
+
Amazon Elastic Container Service (ECS) is a highly scalable, high-performance container management service that supports Docker containers and allows you to easily run applications on a managed cluster of Amazon EC2 instances.
Question: What is AWS Lambda in AWS DevOps
+
AWS Lambda lets you run code without provisioning or managing servers. With Lambda, you can run code for virtually any type of application or backend service, all with zero administration. Just upload your code and Lambda takes care of everything required to run and scale your code with high availability.
Splunk DevOps Interview Questions
    Question: What is Splunk
    +
The Splunk platform gives you visibility into machine data generated from different networks, servers, devices, and hardware. It can give insights into application management, threat visibility, compliance, security, and so on, so it is used to analyze machine data. Data is collected by a forwarder from the source and forwarded to an indexer, where it is stored locally on a host machine or in the cloud. The search head then searches, visualizes, and analyzes the data stored in the indexer and performs various other functions.
    Question: What Are The Components Of Splunk
    +
The main components of Splunk are Forwarders, Indexers and Search Heads. A Deployment Server (or Management Console Host) comes into the picture in the case of a larger environment. Deployment servers act like an antivirus policy server for setting up exceptions and groups, so that you can map and create a different set of data collection policies for Windows-based, Linux-based or Solaris-based servers. Splunk has four important components: Indexer: indexes the machine data. Forwarder: refers to Splunk instances that forward data to the remote indexers. Search Head: provides a GUI for searching. Deployment Server: manages the Splunk components (indexer, forwarder, and search head) in the computing environment.
Question: What are alerts in Splunk
+
An alert is an action that a saved search triggers at regular intervals over a time range, based on the results of the search. When alerts are triggered, various actions occur consequently; for instance, sending an email to a predefined list of people when a search is triggered. There are three types of alerts: Pre-result alerts: the most commonly used alert type, running in real time over an all-time span; these alerts are designed to be triggered whenever a search returns a result. Scheduled alerts: the second most common type; scheduled alerts evaluate the results of a historical search running over a set time range on a regular schedule, and you can define a time range, a schedule and a trigger condition for the alert. Rolling-window alerts: a hybrid of pre-result and scheduled alerts; like the former, they are based on a real-time search, but they do not trigger each time the search returns a matching result. Instead, they examine all events mapping into the rolling window in real time and trigger when the specified condition is met by an event in the window, in the way a scheduled alert is triggered by a scheduled search.
    Question: What Are The Categories Of SPL Commands
    +
SPL commands are divided into five categories: Sorting Results: ordering results and (optionally) limiting the number of results. Filtering Results: taking a set of events or results and filtering them into a smaller set. Grouping Results: grouping events so you can see patterns. Filtering, Modifying and Adding Fields: filtering out some fields to focus on the ones you need, or modifying or adding fields to enrich your results or events. Reporting Results: taking search results and generating a summary for reporting.
Question: What happens if the License Master is unreachable
+
If the license master is unreachable, it is simply not possible to search the data. However, the data coming in to the indexer is not affected: data continues to flow into your Splunk deployment, and the indexers continue to index it as usual. You will, however, get a warning message on top of your search head or web UI saying that you have exceeded the indexing volume, and you either need to reduce the amount of data coming in or buy a higher-capacity license. Essentially, the candidate is expected to answer that indexing does not stop; only searching is halted.
Question: What are common port numbers used by Splunk
+
Common port numbers on which default services run are:
Splunk Management Port: 8089
Splunk Index Replication Port: 8080
KV store: 8191
Splunk Web Port: 8000
Splunk Indexing Port: 9997
Splunk network port: 514
Question: What are Splunk buckets? Explain the bucket lifecycle
+
A directory that contains indexed data is known as a Splunk bucket; it contains events of a certain period. The bucket lifecycle includes the following stages: Hot: contains newly indexed data and is open for writing; for each index, there are one or more hot buckets available. Warm: data rolled from hot. Cold: data rolled from warm. Frozen: data rolled from cold; the indexer deletes frozen data by default, but users can also archive it. Thawed: data restored from an archive; if you archive frozen data, you can later return it to the index by thawing (defrosting) it.
Question: Explain Data Models and Pivot
+
Data models are used for creating a structured, hierarchical model of data. They can be used when you have a large amount of unstructured data and want to make use of that information without writing complex search queries. A few use cases of data models are: Create sales reports: if you have a sales report, you can easily create the total number of successful purchases, and below that a child object containing the list of failed purchases and other views. Set access levels: if you want a structured view of users and their various access levels, you can use a data model. Pivots, on the other hand, give you the flexibility to create the front views of your results and then pick and choose the most appropriate filter for a better view of those results.
Question: What is File Precedence in Splunk
+
File precedence is an important aspect of troubleshooting in Splunk for an administrator, developer, or architect. All of Splunk's configurations are written in .conf files. There can be multiple copies of each of these files, so it is important to know the role these files play when a Splunk instance is running or restarted. To determine the priority among copies of a configuration file, Splunk software first determines the directory scheme. The directory schemes are either a) global or b) app/user. When the context is global (that is, where there is no app/user context), directory priority descends in this order: system local directory (highest priority), app local directories, app default directories, system default directory (lowest priority). When the context is app/user, directory priority descends from user to app to system: user directories for the current user (highest priority), app directories for the currently running app (local, followed by default), app directories for all other apps (local, followed by default; for exported settings only), and system directories (local, followed by default; lowest priority).
Question: Difference between search time and index time field extractions
+
Search time field extraction refers to fields extracted while performing searches, whereas fields extracted when the data comes to the indexer are referred to as index time field extractions. You can set up index time field extraction either at the forwarder level or at the indexer level. Another difference is that fields extracted at search time are not part of the metadata, so they do not consume disk space, whereas fields extracted at index time are part of the metadata and hence consume disk space.
Question: What is Source Type in Splunk
+
Source type is a default field used to identify the data structure of an incoming event. It determines how Splunk Enterprise formats the data during the indexing process. Source type can be set at the forwarder level for indexer extraction to identify different data formats.
    Question: What is SOS
    +
SOS stands for Splunk on Splunk. It is a Splunk app that provides a graphical view of your Splunk environment's performance and issues. It has the following purposes: a diagnostic tool to analyze and troubleshoot problems; examining Splunk environment performance; solving indexing performance issues; observing scheduler activities and issues; seeing the details of scheduler- and user-driven search activity; and searching, viewing and comparing configuration files of Splunk.
Question: What is a Splunk Indexer? Explain its stages
+
The indexer is a Splunk Enterprise component that creates and manages indexes. The main functions of an indexer are indexing incoming data and searching indexed data. A Splunk indexer has the following stages: Input: Splunk Enterprise acquires the raw data from various input sources, breaks it into 64K blocks and assigns them metadata keys, including the host, source and source type of the data. Parsing: also known as event processing; during this stage, Splunk Enterprise analyzes and transforms the data, breaks it into streams, identifies, parses and sets timestamps, and performs metadata annotation and transformation of the data. Indexing: in this phase, the parsed events are written to the disk index, including both the compressed data and the associated index files. Searching: the search function handles all searching aspects (interactive and scheduled searches, reports, dashboards, alerts) on the indexed data and stores saved searches, events, field extractions and views.
Question: State the difference between the stats and eventstats commands
+
Stats: this command produces summary statistics of all existing fields in your search results and stores them as values in new fields. Eventstats: the same as the stats command, except that the aggregation results are added inline to every event, and only if the aggregation is applicable to that event. It computes the requested statistics as stats does, but aggregates them into the original raw data.
log4J DevOps Interview Questions
    Question: What is log4j
    +
log4j is a reliable, fast and flexible logging framework (API) written in Java, which is distributed under the Apache Software License. log4j has been ported to the C, C++, C#, Perl, Python, Ruby, and Eiffel languages. log4j is highly configurable through external configuration files at runtime. It views the logging process in terms of levels of priorities and offers mechanisms to direct logging information to a great variety of destinations.
Question: What are the features of log4j
+
log4j is a widely used framework, and here are its features: It is thread-safe. It is optimized for speed. It is based on a named logger hierarchy. It supports multiple output appenders per logger. It supports internationalization. It is not restricted to a predefined set of facilities. Logging behavior can be set at runtime using a configuration file. It is designed to handle Java exceptions from the start. It uses multiple levels, namely ALL, TRACE, DEBUG, INFO, WARN, ERROR and FATAL. The format of the log output can be easily changed by extending the Layout class. The target of the log output as well as the writing strategy can be altered by implementations of the Appender interface. It is fail-stop; however, although it certainly strives to ensure delivery, log4j does not guarantee that each log statement will be delivered to its destination.
Question: What are the components of log4j
+
log4j has three main components: loggers, responsible for capturing logging information; appenders, responsible for publishing logging information to various preferred destinations; and layouts, responsible for formatting logging information in different styles.
    Question: How do you initialize and use Log4J
    +
import org.apache.log4j.Logger;

public class LoggerTest {
    // One logger per class, named after the class
    static Logger log = Logger.getLogger(LoggerTest.class.getName());

    public void myLoggerMethod() {
        // Guard the call so the message is only built when DEBUG is enabled
        if (log.isDebugEnabled()) {
            log.debug("This is a test message");
        }
    }
}
Question: What are the pros and cons of logging
+
Logging is an important component of software development: well-written logging code offers quick debugging, easy maintenance, and structured storage of an application's runtime information. Logging does have its drawbacks, too: it can slow down an application, and if too verbose, it can cause scrolling blindness. To alleviate these concerns, log4j is designed to be reliable, fast and extensible. Since logging is rarely the main focus of an application, the log4j API strives to be simple to understand and to use.
Question: What is the purpose of the Logger object
+
The top-level layer of the log4j architecture is the Logger, which provides the Logger object. The Logger object is responsible for capturing logging information, and Logger objects are stored in a namespace hierarchy.
Question: What is the purpose of the Layout object
+
The layout layer of the log4j architecture provides objects which are used to format logging information in different styles. It provides support to appender objects before publishing logging information. Layout objects play an important role in publishing logging information in a way that is human-readable and reusable.
Question: What is the purpose of the Appender object
+
The Appender object is responsible for publishing logging information to various preferred destinations such as a database, file, console, or UNIX syslog.
Question: What is the purpose of the ObjectRenderer object
+
The ObjectRenderer object is specialized in providing a String representation of different objects passed to the logging framework. It is used by Layout objects to prepare the final logging information.
Question: What is the LogManager object
+
The LogManager object manages the logging framework. It is responsible for reading the initial configuration parameters from a system-wide configuration file or a configuration class.
Question: How will you define a file appender using log4j.properties
+
The following syntax defines a file appender:
log4j.appender.FILE=org.apache.log4j.FileAppender
log4j.appender.FILE.File=${log}/log.out
Question: What is the purpose of the threshold in an Appender
+
An appender can have a threshold level associated with it, independent of the logger level. The appender ignores any logging messages that have a level lower than the threshold level.
Docker DevOps Interview Questions
    Question: What is Docker
    +
Docker provides a container for managing software workloads on shared infrastructure, all while keeping them isolated from one another. Docker is a tool designed to make it easier to create, deploy, and run applications by using containers. Containers allow a developer to package up an application with all of the parts it needs, such as libraries and other dependencies, and ship it all out as one package. By doing so, the developer can rest assured that the application will run on any other Linux machine regardless of any customized settings that machine might have that could differ from the machine used for writing and testing the code. In a way, Docker is a bit like a virtual machine, but rather than creating a whole virtual operating system, Docker allows applications to use the same Linux kernel as the system they're running on and only requires applications to be shipped with things not already running on the host computer. This gives a significant performance boost and reduces the size of the application.
Question: What are Linux Containers
+
Linux containers, in short, contain applications in a way that keeps them isolated from the host system they run on. Containers allow a developer to package up an application with all of the parts it needs, such as libraries and other dependencies, and ship it all out as one package. They are designed to make it easier to provide a consistent experience as developers and system administrators move code from development environments into production in a fast and replicable way.
    Question: Who is Docker For
    +
Docker is a tool that is designed to benefit both developers and system administrators, making it a part of many DevOps (developers + operations) toolchains. For developers, it means that they can focus on writing code without worrying about the system that it will ultimately be running on. It also allows them to get a head start by using one of thousands of programs already designed to run in a Docker container as a part of their application. For operations staff, Docker gives flexibility and potentially reduces the number of systems needed because of its small footprint and lower overhead.
Question: What is a Docker Container
+
Docker containers include the application and all of its dependencies but share the kernel with other containers, running as isolated processes in user space on the host operating system. Docker containers are not tied to any specific infrastructure: they run on any computer, on any infrastructure, and in any cloud. Now explain how to create a Docker container: containers can be created either by building a Docker image and then running it, or by using images that are already present on Docker Hub. Docker containers are basically runtime instances of Docker images.
Question: What is a Docker Image
+
A Docker image is the source of a Docker container; in other words, Docker images are used to create containers. Images are created with the build command, and they produce a container when started with run. Images are stored in a Docker registry such as registry.hub.docker.com. Because they can become quite large, images are designed to be composed of layers of other images, allowing a minimal amount of data to be sent when transferring images over the network.
Question: What is Docker Hub
+
Docker Hub is a cloud-based registry service which allows you to link to code repositories, build your images and test them, store manually pushed images, and link to Docker Cloud so you can deploy images to your hosts. It provides a centralized resource for container image discovery, distribution and change management, user and team collaboration, and workflow automation throughout the development pipeline.
    Question: What is Docker Swarm
    +
Docker Swarm is native clustering for Docker. It turns a pool of Docker hosts into a single, virtual Docker host. Because Docker Swarm serves the standard Docker API, any tool that already communicates with a Docker daemon can use Swarm to transparently scale to multiple hosts. I would also suggest including some supported tools: Dokku, Docker Compose, Docker Machine, and Jenkins.
Question: What is a Dockerfile used for
+
A Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image. Using docker build, users can create an automated build that executes several command-line instructions in succession.
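As a minimal hedged sketch (the base image, app.sh script and image tag are all hypothetical placeholders), a Dockerfile can be written and then built into an image:

# a three-instruction Dockerfile
cat > Dockerfile <<'EOF'
# hypothetical base image
FROM alpine:3.19
# copy the application script into the image
COPY app.sh /app.sh
# default command when a container starts
CMD ["/bin/sh", "/app.sh"]
EOF
# assemble the image from the Dockerfile in the current directory
docker build -t myapp:latest .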
Question: How is Docker different from other container technologies
+
Docker containers are easy to deploy in a cloud. Docker can get more applications running on the same hardware than other technologies; it makes it easy for developers to quickly create ready-to-run containerized applications, and it makes managing and deploying applications much easier. You can even share containers with your applications.
Question: How to create a Docker container
+
We can use a Docker image to create a Docker container with the command below:
docker run -t -i <image-name>
This command will create and start a container. You should also add: if you want to check the list of all containers on a host, along with their status, use the command below:
docker ps -a
Question: How to stop and restart the Docker container
+
To stop a Docker container you can use the command below:
docker stop <container-id>
To restart the Docker container you can use:
docker restart <container-id>
Question: What is the difference between docker run and docker create
+
The primary difference is that 'docker create' creates a container in a stopped state. Bonus point: you can use 'docker create' and store an outputted container ID for later use. The best way to do this is to use 'docker run' with --cidfile FILE_NAME, although running it again won't allow you to overwrite the file.
Question: What four states can a Docker container be in
+
Running, Paused, Restarting, Exited.
Question: What is the difference between a Repository and a Registry
+
A Docker registry is a service for hosting and distributing images, whereas a Docker repository is a collection of related Docker images.
    Question: How to link containers
    +
The simplest way is to use network port mapping. There is also the --link flag, which is deprecated.
Question: What is the difference between Docker RUN, CMD and ENTRYPOINT
+
CMD does not execute anything at build time but specifies the intended command for the image. RUN actually runs a command and commits the result. If you would like your container to run the same executable every time, you should consider using ENTRYPOINT in combination with CMD.
    Question: How many containers can run per host
    +
As far as the number of containers that can be run goes, this really depends on your environment. The size of your applications as well as the amount of available resources will affect the number of containers that can be run in your environment. Containers, unfortunately, are not magical: they can't create new CPU from scratch. They do, however, provide a more efficient way of utilizing your resources. The containers themselves are super lightweight (remember, a shared OS versus an individual OS per container) and only last as long as the process they are running. Immutable infrastructure, if you will.
VmWare DevOps Interview Questions
    Question: What is VmWare
    +
VMware was founded in 1998 by five IT experts. The company officially launched its first product, VMware Workstation, in 1999, followed by the VMware GSX Server in 2001. The company has launched many additional products since that time. VMware's desktop software is compatible with all major OSs, including Linux, Microsoft Windows, and Mac OS X. VMware provides three different types of desktop software: VMware Workstation: this application is used to install and run multiple copies or instances of the same or different operating systems on a single physical computer. VMware Fusion: this product was designed for Mac users and provides extra compatibility with all other VMware products and applications. VMware Player: this product was launched as freeware for users who do not have licensed VMware products; it is intended only for personal use. VMware's software hypervisors intended for servers are bare-metal embedded hypervisors that run directly on the server hardware without needing an extra primary OS. VMware's line of server software includes: VMware ESX Server: an enterprise-level solution built to provide better functionality than the freeware VMware Server, resulting from lower system overhead. VMware ESX is integrated with VMware vCenter, which provides additional solutions to improve the manageability and consistency of the server implementation. VMware ESXi Server: similar to the ESX Server except that the service console is replaced with a BusyBox installation, and it requires very little disk space to operate. VMware Server: freeware software that can be used over existing operating systems like Linux or Microsoft Windows.
Question: What is Virtualization
+
The process of creating virtual versions of physical components (servers, storage devices, network devices) on a physical host is called virtualization. Virtualization lets you run multiple virtual machines on a single physical machine, which is called an ESXi host.
Question: What are the different types of virtualization
+
There are five basic types of virtualization: Server virtualization: consolidates physical servers, so multiple OSs can run on a single server. Network virtualization: provides a complete reproduction of a physical network as a software-defined network. Storage virtualization: provides an abstraction layer for physical storage resources so they can be managed and optimized in a virtual deployment. Application virtualization: increases the mobility of applications and allows migration of VMs from one host to another with minimal downtime. Desktop virtualization: virtualizes desktops to reduce cost and increase service.
Question: What is the Service Console
+
The service console is developed based upon the Red Hat Linux operating system; it is used to manage the VMkernel.
Question: What is the vCenter Agent
+
The VC agent is an agent installed on an ESX server which enables communication between vCenter and the ESX server. This agent is installed on ESX/ESXi when you add the ESX host to vCenter.
Question: What is VMkernel
+
The VMware kernel (VMkernel) is a proprietary kernel of VMware and is not based on any flavor of the Linux operating system. The VMkernel requires an operating system to boot and manage the kernel. A service console is provided when the VMware kernel is booted. Only the service console is based on Red Hat Linux OS, not the VMkernel.
Question: What is VMkernel and why is it important
+
VMkernel is a virtualization interface between a virtual machine and the ESXi host, which stores VMs. It is responsible for allocating all available ESXi host resources, such as memory, CPU and storage, to VMs. It also controls special services such as vMotion, Fault Tolerance, NFS, traffic management and iSCSI. To access these services, a VMkernel port can be configured on the ESXi server using a standard or distributed vSwitch. Without VMkernel, hosted VMs cannot communicate with the ESXi server.
Question: What is a hypervisor and what are its types
+
A hypervisor is a virtualization layer that enables multiple operating systems to share a single hardware host. Each operating system or VM is allocated physical resources such as memory, CPU and storage by the host. There are two types of hypervisors: hosted hypervisors (which work as applications, e.g. VMware Workstation) and bare-metal hypervisors (virtualization software, e.g. VMvisor or Hyper-V, which is installed directly onto the hardware and controls all physical resources).
Question: What is virtual networking
+
A network of VMs running on a physical server that are logically connected with each other is called virtual networking.
Question: What is vSS
+
vSS stands for Virtual Standard Switch and is responsible for communication between VMs hosted on a single physical host. It works like a physical switch: it automatically detects a VM which wants to communicate with another VM on the same physical server.
Question: What is a VMkernel adapter and why is it used
+
A VMkernel adapter provides network connectivity to the ESXi host to handle network traffic for vMotion, IP storage, NAS, Fault Tolerance, and vSAN. For each type of traffic (vMotion, vSAN, etc.), a separate VMkernel adapter should be created and configured.
Question: What three port groups are configured in ESXi networking
+
Virtual Machine Port Group: used for the virtual machine network. Service Console Port Group: used for service console communications. VMkernel Port Group: used for vMotion, iSCSI and NFS communications.
Question: What are the main components of the vCenter Server architecture
+
There are three main components of the vCenter Server architecture: vSphere Client and Web Client: a user interface. vCenter Server database: SQL Server or embedded PostgreSQL to store inventory, security roles, resource pools, etc. SSO: a security domain in the virtual environment.
Question: What is a datastore
+
A datastore is a storage location where virtual machine files are stored and accessed. A datastore is based on a file system, such as VMFS or NFS.
Question: How many disk types are there in VMware
+
There are three disk types in vSphere: Thick Provisioned Lazy Zeroed: every virtual disk is created in this format by default; physical space is allocated to a VM when the virtual disk is created, and it can't be converted to a thin disk. Thick Provisioned Eager Zeroed: this disk type is used by VMware Fault Tolerance; all required disk space is allocated to a VM at creation time, and it takes more time to create a virtual disk than the other disk formats. Thin Provisioned: provides on-demand allocation of disk space to a VM; as the data grows, the size of the disk grows. Storage capacity utilization can be up to 100% with thin provisioning.
Question: What is Storage vMotion
+
It is similar to traditional vMotion, except that in Storage vMotion the virtual disk of a VM is moved from one datastore to another. During Storage vMotion, thick provisioned disks can be transformed into thin provisioned disks.
Question: What is the use of the VMkernel port
+
The VMkernel port is used by ESX/ESXi for vMotion, iSCSI and NFS communications. ESXi uses the VMkernel as the management network, since it doesn't have a service console built into it.
Question: What are the different types of partitions in an ESX server
+
/ (root), swap, /var, /var/core, /opt, /home, /tmp
    Question: Explain What is VMware DRS
    +
VMware DRS stands for Distributed Resource Scheduler; it dynamically balances resources across various hosts in a cluster or resource pool. It enables users to define the rules and policies which decide how virtual machines share resources, and how those resources are prioritized among multiple virtual machines.
DevOps Testing Interview Questions
Question: What is Continuous Testing
+
Continuous Testing is the process of executing automated tests to obtain immediate feedback on the business risks associated with the latest build. In this way, each build is tested continuously, allowing development teams to get fast feedback so that they can prevent problems from progressing to the next stage of the software delivery life cycle.
Question: What is Automation Testing
+
Automation testing is the process of automating the manual testing process. Automation testing involves the use of separate testing tools, which can be executed repeatedly and don't require any manual intervention.
Question: What are the benefits of Automation Testing
+
Here are some of the benefits of automation testing: it supports execution of repeated test cases; aids in testing a large test matrix; enables parallel execution; encourages unattended execution; improves accuracy, thereby reducing human-generated errors; and saves time and money.
Question: Why is Continuous Testing important for DevOps
+
Continuous Testing allows any change made in the code to be tested immediately. This avoids the problems created by leaving "big-bang" testing to the end of the development cycle, such as release delays and quality issues. In this way, Continuous Testing facilitates more frequent, good-quality releases.
Question: What are the testing types supported by Selenium
+
Selenium supports two types of testing: Regression testing: the act of retesting a product around an area where a bug was fixed. Functional testing: the testing of software features (functional points) individually.
Question: What is the difference between the Assert and Verify commands in Selenium
+
The Assert command checks whether the given condition is true or false and halts execution if the assertion fails. The Verify command also checks whether the given condition is true or false, but irrespective of the condition being true or false, program execution doesn't halt: any failure during verification does not stop the execution, and all the test steps are executed.
    Summary
    +
DevOps refers to a wide range of tools, processes and practices used by companies to improve their build, deployment, testing and release life cycles. In order to ace a DevOps interview, you need to have a deep understanding of all of these tools and processes. Most of the technologies and processes used to implement DevOps are not isolated; most probably you are already familiar with many of them. All you have to do is prepare for them from a DevOps perspective. In this guide I have created the largest set of interview questions. Each section in this guide caters to a specific area of DevOps. In order to increase your chances of success in a DevOps interview, you should go through all of these questions.

    Git Operations

    +
    Diffbet soft, mixed, hard reset?
    +
    Soft keeps changes staged, mixed unstages them, hard deletes all changes in working tree.
    To revert a merge?
    +
Use git revert -m 1 <merge-commit> to undo a merge commit safely; -m 1 keeps the first parent as the mainline.
    To squash commits?
    +
    Use interactive rebase: git rebase -i HEAD~n and mark commits as squash or fixup.
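For example, a hedged sketch of squashing the last three commits (the hashes and subjects are hypothetical):

git rebase -i HEAD~3
# the editor opens a todo list such as:
#   pick   a1b2c3 add feature
#   squash d4e5f6 fix typo
#   squash 789abc address review
# saving the file combines the three commits into one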
    To view changes before committing?
    +
    Use git status and git diff to inspect changes in files.
    To view git commit history?
    +
    Use git log or git log --oneline for concise history. Tools like GitKraken or GitHub history visualize commits.

    Git Tags & Releases

    +
    Diffbet lightweight and annotated tags?
    +
    Lightweight is just a pointer; annotated has metadata, tagger info, and can be signed.
    Git lfs?
    +
    Git Large File Storage handles large files (images, videos) by storing pointers in Git while actual files reside elsewhere.
    Git submodules?
    +
    Submodules allow embedding one Git repo inside another while keeping histories separate.
    To create and push tags?
    +
    git tag -a v1.0 -m "Release" → git push origin v1.0
    To revert a pushed commit?
    +
    Use git revert to create a new commit that undoes changes without rewriting history.

    Git

    +
    Branching in git?
    +
    Branching allows multiple lines of development in the same repo. It enables feature development without affecting the main branch.
    Conflict in git?
    +
    A conflict occurs when multiple changes in the same file/line cannot be merged automatically. Manual resolution is required.
    Detached head in git?
    +
    Detached HEAD occurs when HEAD points directly to a commit not a branch.
    Diffbet a local and a remote repository?
    +
    Local repository exists on your machine; remote repository exists on a server (like GitHub) for collaboration.
    Diffbet feature branch and main/master branch?
    +
    Feature branch is for new work; main/master is stable production-ready code.
    Diffbet git and svn?
    +
    Git is distributed allowing full local repositories and offline work; SVN is centralized and requires server access for most operations.
    Diffbet git fetch and git pull?
    +
    Git fetch downloads updates from remote but doesn’t merge; Git pull downloads and merges changes.
    Diffbet git merge and git cherry-pick?
    +
    Merge combines branches; cherry-pick applies a specific commit to the current branch.
    Diffbet git merge and git rebase?
    +
    Merge combines branches with a merge commit; rebase applies changes on top of another branch creating a linear history.
    Diffbet git pull request and merge request?
    +
    Pull request is GitHub terminology; merge request is GitLab/Bitbucket terminology.
    Diffbet git push and git pull?
    +
    Push uploads changes; pull downloads and merges changes from remote.
Diffbet git reset --soft, --mixed and --hard?
+
--soft moves HEAD without touching staging/working; --mixed resets staging; --hard resets staging and working directory.
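A quick sketch of the three modes against the last commit:

git reset --soft HEAD~1   # undo the commit, keep changes staged
git reset --mixed HEAD~1  # undo the commit, keep changes unstaged
git reset --hard HEAD~1   # undo the commit and discard the changes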
    Diffbet git submodule and subtree?
    +
    Submodule links external repo separately; subtree integrates the external repo into the main repo.
    Diffbet github and gitlab?
    +
    GitHub focuses on Git hosting and community; GitLab offers Git hosting plus integrated CI/CD and DevOps tools.
    Diffbet global and local git config?
    +
    Global config applies to all repositories; local config applies to a specific repository.
    Diffbet lightweight and annotated tags?
    +
    Lightweight tag is just a pointer; annotated tag contains metadata like author date and message.
    Git add?
    +
    Git add stages changes in the working directory to be included in the next commit.
    Git archive --format=zip?
    +
    Creates a zip file of repository content at a specific commit.
    Git archive?
    +
    Git archive creates a zip or tar of a specific commit or branch.
    Git bisect bad?
    +
    Marks a commit as bad during bisect.
    Git bisect good?
    +
    Marks a commit as good during bisect.
    Git bisect start?
    +
    Begins a bisect session to find a bad commit.
    Git bisect?
    +
    Git bisect finds the commit that introduced a bug using binary search.
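A hedged sketch of a bisect session (the known-good tag is a hypothetical placeholder):

git bisect start
git bisect bad            # the current HEAD exhibits the bug
git bisect good v1.0      # a commit known to work
# git checks out a midpoint commit; test it, then mark it:
git bisect good           # or: git bisect bad, until the culprit is found
git bisect reset          # return to the original branch when done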
    Git blame -l?
    +
    Shows annotations for a specific line range in a file.
    Git blame?
    +
    Git blame shows which user last modified each line of a file.
    Git branch?
    +
    Git branch is a pointer to a commit used to develop features independently.
    Git checkout -b?
    +
    Creates a new branch and switches to it.
    Git checkout?
    +
    Git checkout switches branches or restores files in the working directory.
    Git cherry?
    +
    Git cherry shows commits in one branch that are not in another.
    Git clean?
    +
    Git clean removes untracked files from the working directory.
    Git clone?
    +
    Git clone creates a copy of a remote repository on your local machine.
    Git commit --amend?
    +
    Modifies the last commit with new changes or message.
    Git commit?
    +
    Git commit saves changes in the local repository with a descriptive message.
    Git config --list?
    +
    Displays all Git configuration settings.
    Git config?
    +
    Git config sets configuration values like username email and editor.
    Git describe?
    +
    Git describe generates a human-readable name for a commit using nearest tag.
    Git diff head?
    +
    Shows differences between working directory and last commit.
    Git diff origin/main?
    +
    Shows differences between local and remote main branch.
    Git diff --staged?
    +
    Shows differences between staged changes and the last commit.
    Git diff?
    +
    Git diff shows differences between working directory staging area and commits.
    Git fast-forward merge?
    +
    Fast-forward merge moves the branch pointer forward when no divergent commits exist.
    Git fetch --all?
    +
    Fetches all branches from all remotes.
    Git fetch origin branch_name?
    +
    Fetches a specific branch from a remote.
    Git filter-branch?
    +
    Rewrites Git history typically for removing sensitive data.
    Git gc?
    +
    Git garbage collection cleans unnecessary files and optimizes repository.
    Git head?
    +
    HEAD points to the current branch’s latest commit.
    Git hook?
    +
    Git hooks are scripts that run automatically at certain Git events (pre-commit post-commit etc.).
    Git ignore?
    +
    .gitignore specifies files or directories Git should ignore.
    Git log --graph?
    +
    Displays commit history as an ASCII graph.
    Git log --oneline?
    +
    Shows commit history in a concise one-line format per commit.
    Git log --stat?
    +
    Shows commit history with file changes statistics.
    Git log?
    +
    Git log shows the commit history in a repository.
    Git ls-files?
    +
    Lists tracked files in the repository.
    Git merge conflict?
    +
    Merge conflict occurs when Git cannot automatically reconcile differences between branches.
    Git mv?
    +
    Git mv moves or renames a file and stages the change.
    Git notes?
    +
    Git notes attach arbitrary metadata to commits.
    Git origin?
    +
    Origin is the default name for a remote repository when cloned.
    Git prune?
    +
    Git prune removes unreachable objects from the repository.
    Git pull --ff-only?
    +
    Pulls changes only if a fast-forward merge is possible.
    Git pull --rebase?
    +
    Pulls remote changes and rebases local commits on top.
    Git pull request?
    +
    Pull request is a method to propose changes from one branch to another reviewed before merging.
    Git push origin --delete?
    +
    Deletes a remote branch or tag.
    Git push?
    +
    Git push uploads commits from local repository to a remote repository.
    Git rebase interactive?
    +
    Interactive rebase allows editing reordering squashing or removing commits.
    Git reflog delete?
    +
    Removes specific entries from reflog.
    Git reflog expire?
    +
    Cleans old entries from the reflog.
Git reflog show --all?
    +
    Shows reflog for all references.
    Git reflog show?
    +
    Displays reference log of commits.
    Git reflog?
    +
    Git reflog shows the history of HEAD and branch updates including resets.
    Git remote add?
    +
    Adds a new remote repository reference.
    Git remote remove?
    +
    Removes a remote repository reference.
    Git remote set-url?
    +
    Changes the URL of a remote repository.
    Git remote -v?
    +
    Shows URLs of remote repositories for fetch and push operations.
    Git remote?
    +
    Git remote is a reference to a remote repository.
    Git repository?
    +
    A repository (repo) is a directory that contains your project files and a .git folder tracking changes.
    Git reset head?
    +
    Unstages changes from staging area.
    Git reset?
    +
    Git reset undoes commits or changes optionally moving the HEAD pointer.
    Git revert -n?
    +
    Reverts changes without committing immediately.
    Git revert?
    +
    Git revert creates a new commit that undoes changes from a previous commit.
    Git rev-parse?
    +
    Resolves Git revisions to SHA-1 hashes.
    Git rm?
    +
    Git rm removes files from working directory and staging area.
    Git shortlog -n?
    +
    Shows authors ranked by commit count.
    Git shortlog -s?
    +
    Displays commit count per author.
    Git shortlog?
    +
    Git shortlog summarizes commits by author.
    Git sparse-checkout?
    +
    Sparse checkout allows checking out only part of a repository.
    Git squash?
    +
    Squash combines multiple commits into one for cleaner history.
    Git stash apply?
    +
    Git stash apply restores stashed changes without removing them from the stash list.
    Git stash branch?
    +
    Creates a new branch from a stash.
    Git stash list?
    +
    Lists all stashed changes.
    Git stash pop?
    +
    Git stash pop restores stashed changes and removes them from the stash list.
    Git stash?
    +
    Git stash temporarily shelves changes in the working directory to clean the workspace.
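A quick sketch of a stash round-trip:

git stash        # shelve uncommitted changes
git stash list   # inspect the stashed entries
git stash pop    # re-apply the latest stash and drop it from the list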
    Git status?
    +
    Git status shows the current state of the working directory and staging area.
    Git submodule?
    +
    Submodule allows including one Git repository inside another.
    Git tag -a?
    +
    Creates an annotated tag with metadata.
    Git tag -d?
    +
    Deletes a local tag.
    Git tag --list?
    +
    Lists all tags in the repository.
    Git tag?
    +
    Git tag marks specific commits as important often used for releases.
    Git workflow?
    +
    Git workflow is a set of rules or practices for managing branches and collaboration.
    Git worktree?
    +
    Git worktree allows multiple working directories for the same repository.
    Git?
    +
    Git is a distributed version control system used to track changes in source code during software development.
    Github?
    +
    GitHub is a web-based platform for hosting Git repositories and collaboration.
    Gitlab?
    +
    GitLab is a web-based DevOps platform with Git repository hosting CI/CD and more.
    Popular git workflows?
    +
    Git Flow GitHub Flow and GitLab Flow.
    Pull request workflow?
    +
    Developers push changes → create PR → reviewers approve → merge into main branch. Ensures code quality and collaboration.
    To resolve git conflicts?
    +
    Open conflicting files → edit changes → git add resolved files → git commit.
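A hedged sketch of the flow (the branch and file names are hypothetical):

git merge feature          # git reports a conflict in app.txt
# edit app.txt and resolve the <<<<<<< / ======= / >>>>>>> markers
git add app.txt            # mark the file as resolved
git commit                 # conclude the merge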
    You resolve merge conflicts in git?
    +
    Manually edit files mark as resolved then commit the changes.

    GitHub Actions

    +
    Action?
    +
    Reusable code that performs a specific task in a workflow step.
    Github actions?
    +
    GitHub’s native CI/CD platform to automate workflows on Git events.
    Job in github actions?
    +
    A unit of work in a workflow, which can run on specified runners.
    Matrix builds?
    +
    Run a job in parallel across multiple OS, language, or dependency versions.
    Runner in github actions?
    +
    Server that executes workflows. Can be GitHub-hosted or self-hosted.
    Step in github actions?
    +
    An individual task inside a job, like running a script or command.
    Workflow syntax in github actions?
    +
    Workflows are YAML files defining on, jobs, steps, and runs-on properties.
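As a minimal hedged sketch (the workflow name, branch and build command are hypothetical), a workflow file can be added to the repository like this:

mkdir -p .github/workflows
cat > .github/workflows/ci.yml <<'EOF'
name: ci
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./build.sh
EOF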
    Workflow?
    +
    A set of automated steps triggered by events in the repository (push, pull request, schedule).
    You trigger github actions?
    +
    On push, pull requests, schedule, release, or manual dispatch events.
    You use secrets in github actions?
    +
    Store credentials in repository secrets and access them as environment variables in workflows.
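As a hedged sketch (the secret name and value are placeholders), a secret can be stored with the GitHub CLI and then referenced in a workflow step as ${{ secrets.API_TOKEN }}:

# store a repository secret using the GitHub CLI
gh secret set API_TOKEN --body "s3cr3t-value"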

    GitHub

    +
    Diffbet github and gitlab?
    +
GitHub focuses on public and private repo hosting with Actions for CI/CD. GitLab covers the complete DevOps lifecycle, offering CI/CD, issue tracking, and a container registry.
    Fork in github?
    +
    A fork is a personal copy of someone else’s repository. Changes can be pushed to your fork and later submitted as a pull request to the original repo.
    Github actions?
    +
    A CI/CD workflow tool integrated with GitHub. Actions automate tasks like build, test, and deploy on events such as push or PR.
    Github?
    +
    GitHub is a cloud-based Git repository hosting service. It provides version control, collaboration, pull requests, issues, and CI/CD via GitHub Actions.
    To create a github repository?
    +
    Sign in → Click New Repository → Provide name, description, visibility → Initialize with README → Create.

    GitLab CI/CD

    +
    .gitlab-ci.yml?
    +
    A YAML file defining jobs, stages, scripts, and pipelines for GitLab CI/CD.
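A minimal hedged sketch of such a file (the stage names and scripts are hypothetical):

cat > .gitlab-ci.yml <<'EOF'
stages:
  - build
  - test

build-job:
  stage: build
  script:
    - ./build.sh

test-job:
  stage: test
  script:
    - ./run-tests.sh
EOF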
    Artifacts in GitLab CI/CD?
    +
    Files generated by a job and stored for later stages, like binaries or reports.
    Cache in GitLab CI/CD?
    +
    Caches files between jobs or pipelines to speed up builds (e.g., dependencies).
    Environment in GitLab CI/CD?
    +
    Defines deployment targets like staging, production, or testing with URLs and variables.
    GitLab CI/CD?
    +
    A built-in CI/CD system in GitLab for automating build, test, and deployment pipelines.
    GitLab runners?
    +
    Agents that execute CI/CD jobs on specified environments (shared or specific runners).
    Job in GitLab CI/CD?
    +
    A unit of work executed in a stage, containing scripts and conditions for execution.
    Stages in GitLab CI/CD?
    +
    Logical phases of pipeline execution like build, test, deploy, or cleanup.
    How do you handle secrets in GitLab CI/CD?
    +
    Use CI/CD variables or GitLab’s Vault integration to securely manage credentials (see the snippet below).
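    A short sketch, assuming a CI/CD variable named API_KEY (hypothetical) defined under Settings → CI/CD → Variables; GitLab exposes it to jobs as an environment variable:

        deploy-job:
          stage: deploy
          script:
            - ./deploy.sh --token "$API_KEY"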
    How do you trigger a GitLab pipeline?
    +
    Via push events, merge requests, scheduled pipelines, or API calls.

    GitLab

    +
    Difference between GitLab and GitHub?
    +
    GitLab offers built-in CI/CD, pipelines, and issue management, while GitHub focuses on code hosting and GitHub Actions for CI/CD.
    GitLab runners?
    +
    GitLab Runners execute CI/CD jobs defined in .gitlab-ci.yml. They can be shared or specific to a project.
    GitLab?
    +
    GitLab is a web-based Git repository manager providing CI/CD, issue tracking, project management, and DevOps features in one platform.
    Merge request in GitLab?
    +
    The GitLab equivalent of a pull request; merge requests let you review and merge code from a feature branch into the main branch.
    How do you secure GitLab repositories?
    +
    Use branch protection, access controls, MFA, deploy keys, and GitLab CI/CD secrets for security.

    GraphQL

    +
    When is GraphQL best used?
    +
    GraphQL is ideal for applications needing flexible data fetching, real-time updates, and complex relationships. Popular in mobile apps, dashboards, and microservices.
    Apollo Client?
    +
    Apollo Client is a popular GraphQL client for fetching and caching data. It simplifies state management and GraphQL API communication. Often used with React.
    Apollo Server?
    +
    Apollo Server is a GraphQL server implementation for Node.js. It allows building schemas, resolvers, and handling API execution. It integrates well with Express and microservices.
    Can GraphQL be used with microservices?
    +
    Yes, GraphQL is often used as a gateway for microservices. Federation and stitching combine multiple services seamlessly into one schema.
    Who developed GraphQL?
    +
    GraphQL was developed by Meta (Facebook) in 2012 and open-sourced in 2015. It helps handle complex data structures efficiently. Today, it is widely used in modern web applications.
    Difference between REST and GraphQL?
    +
    REST uses multiple endpoints while GraphQL uses a single endpoint. REST may overfetch or underfetch, while GraphQL returns only requested fields. GraphQL offers real-time subscriptions; REST usually doesn’t.
    Does GraphQL support caching?
    +
    GraphQL itself doesn't provide caching, but clients like Apollo and Relay support it. Caching reduces unnecessary network calls. Server-side caching can also be applied.
    Does GraphQL support file uploads?
    +
    GraphQL supports uploads using multipart requests or libraries such as Apollo Upload. It requires additional handling since it's not built in natively.
    Does GraphQL work over HTTP?
    +
    Yes, GraphQL works over HTTP POST or GET. It is protocol-agnostic and can also run over WebSockets. It integrates easily with existing HTTP infrastructure (see the example below).
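    A short sketch of a query sent over plain HTTP POST, assuming a server at https://example.com/graphql (hypothetical endpoint):

        curl -X POST https://example.com/graphql \
          -H "Content-Type: application/json" \
          -d '{"query": "{ user(id: \"1\") { name email } }"}'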
    GraphiQL?
    +
    GraphiQL is an IDE for building and testing GraphQL queries. It provides a playground-like environment. It automatically provides schema documentation.
    GraphQL batch requesting?
    +
    Batch requesting allows sending multiple queries in a single network request. This reduces overhead and improves performance. Useful in microservices and mobile apps.
    GraphQL federation?
    +
    Federation enables multiple GraphQL services to work as one unified graph. It supports distributed data ownership and scalability. Useful in microservice architecture.
    GraphQL gateway?
    +
    A gateway orchestrates and aggregates multiple GraphQL services behind one endpoint. It handles authentication, routing, and caching. Often used with microservices.
    GraphQL N+1 problem?
    +
    It occurs when resolvers make repeated database calls for nested fields. Tools like DataLoader help batch requests and prevent inefficiency (a sketch follows below).
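    A minimal batching sketch with the dataloader package (Node.js/TypeScript); fetchUsersByIds is a hypothetical bulk database helper:

        import DataLoader from 'dataloader';

        interface User { id: string; name: string; }
        interface Post { authorId: string; }

        // Hypothetical bulk fetch: one query (e.g. WHERE id IN (...)) for all requested ids.
        async function fetchUsersByIds(ids: readonly string[]): Promise<User[]> {
          return []; // stand-in for a real database call
        }

        const userLoader = new DataLoader<string, User>(async (ids) => {
          const users = await fetchUsersByIds(ids);
          // DataLoader requires results in the same order as the input keys.
          return ids.map((id) => users.find((u) => u.id === id) ?? new Error(`User ${id} not found`));
        });

        // In a resolver, many load() calls in one tick collapse into a single batch.
        const resolvers = {
          Post: {
            author: (post: Post) => userLoader.load(post.authorId),
          },
        };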
    GraphQL validations?
    +
    Validation ensures correct syntax, field existence, and type matching before execution. It prevents runtime errors and improves API stability. It is handled automatically by schema rules.
    GraphQL?
    +
    GraphQL is a query language for APIs that allows clients to request only required data. It serves as an alternative to REST. It reduces overfetching and underfetching issues.
    Introspection in GraphQL?
    +
    Introspection enables clients to query schema metadata. It helps tools auto-generate documentation. It makes GraphQL self-descriptive.
    Is GraphQL replacing REST?
    +
    GraphQL does not replace REST entirely but complements it. REST works well for simple and public APIs. GraphQL is preferred for complex and data-driven applications.
    Is GraphQL strongly typed?
    +
    Yes, GraphQL uses a strongly typed schema. Each field must have a defined type, ensuring predictable responses and validation.
    How is versioning handled in GraphQL?
    +
    GraphQL typically avoids versioning by evolving schemas gradually. Fields can be deprecated without breaking clients. This reduces version overhead.
    Mutation in GraphQL?
    +
    Mutations are used for creating, updating, or deleting data. They change server-side state. Mutations are similar to POST, PUT, or DELETE in REST (see the example below).
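    A short mutation sketch, assuming the schema exposes a createUser field (hypothetical name and arguments):

        mutation {
          createUser(name: "Ada", email: "ada@example.com") {
            id
            name
          }
        }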
    Overfetching?
    +
    Overfetching occurs when an API returns more data than needed. It is common with REST's fixed endpoints. GraphQL prevents overfetching by targeting specific fields.
    Query in GraphQL?
    +
    A query fetches data from a GraphQL server. It allows clients to specify exactly which fields they need, and the response matches the query structure (see the example below).
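    A short query sketch, assuming a schema with a user field (hypothetical); the JSON response mirrors the query shape:

        query {
          user(id: "1") {
            name
            email
          }
        }

        # Response:
        # { "data": { "user": { "name": "Ada", "email": "ada@example.com" } } }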
    Relay?
    +
    Relay is a GraphQL client developed by Meta. It focuses on performance and caching with strict conventions. It appears mostly in large-scale apps.
    Resolver in GraphQL?
    +
    Resolvers are functions that handle requests and return data for a specific field. They act like controllers in REST. Each field in a schema can have its own resolver.
    Scalars?
    +
    Scalars represent primitive data types like String, Int, Boolean, and Float. They are the base building blocks of a schema. Custom scalars can also be created.
    Schema in GraphQL?
    +
    A schema defines the structure of data and operations available in GraphQL. It includes types, queries, and mutations. It acts as a contract between client and server (a minimal example follows below).
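    A minimal schema sketch in SDL, assuming a simple user model (hypothetical types and fields):

        type User {
          id: ID!
          name: String!
          email: String
        }

        type Query {
          user(id: ID!): User
        }

        type Mutation {
          createUser(name: String!, email: String): User
        }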
    Subscriptions in GraphQL?
    +
    Subscriptions enable real-time communication, typically over WebSockets. They push updates automatically when data changes. Useful for chat apps and live notifications (see the example below).
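    A short subscription sketch, assuming the schema defines a messageAdded event (hypothetical field):

        subscription {
          messageAdded(channelId: "42") {
            id
            text
          }
        }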
    Type in GraphQL?
    +
    Types define the shape of objects in GraphQL. Examples include scalar types like Int and String, or custom object types. They help ensure strong typing.
    Underfetching?
    +
    Underfetching means an API returns insufficient data, requiring multiple calls. REST often suffers from this issue with nested data. GraphQL eliminates underfetching via flexible queries.

    Notes in Images

    +
    [Image attachments covering: .NET, AI, API, Architecture, CI/CD, Cloud, Creatio, Database, DevOps, Docker, Git, Jenkins, JSON Web Token (JWT), Kubernetes, Microservices, Terraform.]